Also, since I lost the context of the other part of the post and you can't edit posts on mastodon, I think I should clarify that by "needs a Kubernetes" I don't mean "needs a big complex thing that takes millions of man hours to create" or "needs a Docker-based distributed clustering/scheduling system".
If you've never worked with Kubernetes you might not know this, but Kubernetes itself isn't really a single implementation, it's a bunch of interfaces that define standard ways for all the parts of said distributed clustering/scheduling system to work together. What people colloquially refer to as "Kubernetes" is actually those interfaces plus (probably) the reference implementation of each of the interchangeable parts.
But the magic is that you can swap those parts out with your own if you want. You can upgrade one part without breaking the others. You can have a proliferation of "flavors" of Kubernetes similar to the proliferation of Linux distributions.
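To make that concrete, here's a tiny sketch (TypeScript, with names I made up for illustration -- this is not real Kubernetes code) of what "a set of interfaces with swappable implementations" means:

```typescript
// Purely illustrative: a made-up interface in the spirit of Kubernetes'
// pluggable components (CRI, CNI, CSI, and so on).
interface ContainerRuntime {
  // Start a workload and return an opaque ID the rest of the system can track.
  run(image: string): Promise<string>;
  stop(id: string): Promise<void>;
}

// One implementation of the interface...
class DockerRuntime implements ContainerRuntime {
  async run(image: string): Promise<string> {
    // ...would talk to the Docker daemon here...
    return `docker-${image}-${Date.now()}`;
  }
  async stop(id: string): Promise<void> {}
}

// ...and a drop-in replacement.
class ContainerdRuntime implements ContainerRuntime {
  async run(image: string): Promise<string> {
    // ...would talk to containerd here...
    return `containerd-${image}-${Date.now()}`;
  }
  async stop(id: string): Promise<void> {}
}

// The orchestration code only knows about the interface, so either
// implementation can be upgraded or swapped out independently.
async function deploy(runtime: ContainerRuntime, image: string): Promise<void> {
  const id = await runtime.run(image);
  console.log(`started ${image} as ${id}`);
}
```

Kubernetes does this for real with things like the Container Runtime Interface, which is how clusters were able to move from Docker to containerd without the rest of the system caring.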
I just think that we as "small web" developers should be mindful of this trend tech has followed since its inception, going back to the Unix days -- small, simple programs that can work together.
I imagine a "small tech kubernetes" as a set of interfaces that all of our projects can conform to so they can interoperate and proliferate, so other developers can take them and adapt them to other use cases without losing as much interoperability.
An explanation of what I meant by "secure-attestation-based" at the end:
The first part talked a lot about servers and ownership over processes & how it's related to power, having power over other people. It talked about how in academic Computer Science right now, no one has figured out how to make a process that operates directly on data but does not "own" that data (where "owning" means being able to censor, falsify, or spy on it).
It also mentioned how DRM today uses an ersatz solution for this: CPU secure enclaves and secure remote attestation. Some projects (like Signal's secure contact discovery) have started using the same tech to try to liberate people instead, but I have a lot of doubts about how viable this ersatz solution is in the long term / how viable it is for more widespread use.
For what it's worth, here's the slightly edited second half which I didn't lose:
This was responding to the room's consensus regarding "inside-out design (architecture first, UI later) / trying to please everyone being a fool's errand":
Maybe, but isn't the opposite also a fool's errand? I do think that the "outside in" design (make the UI first, then decide how the platform should work to support that UI) & attempts to reject complexity may end up lonely. I think the history of small tech has primarily been a history of failure (at least when you look at it in the grand scheme of things globally), not just because of poor UX, but also because of technical fragmentation. Our predecessors burned bright and created many, many wonderful things. But how many of those things are still used today? How many more dead projects do we need?
https://en.wikipedia.org/wiki/Comparison_of_software_and_protocols_for_distributed_social_networking
Reminds me of XKCD's "Standards" https://xkcd.com/927/
I think small tech needs coalescence more than anything. There's a reason why most people settled on using GNU+Linux for servers... Now the corporate world is settling on Kubernetes as well, for good reason. Building/deploying/operating software on Kubernetes is easier for them, and it's easier to train/learn/hire for as well. What can **we** settle on? I think small tech needs a Kubernetes of its own, but designed for the small-tech use case & with a much better user experience. Right now we have about 6 or 7 competing projects: Nextcloud, Syncloud, YunoHost, sandstorm.io, Site.js/small-web.org/Basil, etc. None of the parts of any of them are designed to be interoperable or interchangeable. What happens when one of these projects stops being maintained? What if I start using one of them, but then I really want a killer app or feature that's only available on another?
I believe that solving this kind of problem does require inside-out design. There are unique challenges and technical constraints associated with shoehorning as much user ownership as possible into the digital everyday (cloud services, ISP-owned home routers, NATs, smart TVs, shared WiFi, etc) which we inhabit. There may be many different ways to do it, but I would like to believe it's possible to define standards, interfaces, etc which cover all the possible use cases while maintaining interoperability. Technologists have been doing this kind of thing for decades... At least IMO, all the tech that declined to coalesce around interoperable standards is dead or dying.
I also sorta disagree with Aral that "popularity / scaling is the way to the dark side", although it's probably just semantics.
At some point we will have to scale small tech. Not just scaling to millions of individual user-owners, but also building ways for individuals' sites, data, and processes to grow, to become highly available, withstanding natural disasters, government repression, hell, maybe even the viral "hug of death" effect associated with reaching the front page of an aggregator like reddit or trending all across the future fediverse. Probably a p2p, secure-attestation-based "distributed CloudFront" or something similar will have to become involved at that point.
This may not be happening yet, but I'd rather not plan for failure. I don't want to end up completely re-architecting my systems to accommodate a future where we succeed.
Also shout out to @f0x for trying to recover it for me; turns out mastodon actually deletes your data when you ask it to delete. Hooray for humane tech!
Enjoyed watching @laura and @aral's "Small is Beautiful" show yesterday featuring @gabek of Owncast fame and @heydon the web accessibility expert behind Webbed Briefs https://briefs.video
Wrote a huge effortpost (so big it took up 2 toots) trying to respond to everything that was discussed, then promptly messed it up, got trolled by the mastodon threads / "delete and redraft" feature and accidentally deleted the wrong post, permanently losing the data. Oops. Still learning how to use mastodon properly.
I got my new Greenhouse desktop application working this week; it can publish a local listening port or a folder full of files to the internet via the greenhouse cloud service. Check out the screencasts I posted on my blog!
https://sequentialread.com/greenhouse-development-update-may/
Last month I posted about my progress developing my new cloud provider, greenhouse.
https://sequentialread.com/greenhouse-development-update-april/
I'm looking forward to posting an update for the month of May as soon as I get home! I did a demo yesterday where I was able to get a reverse-proxy and static file server online simply by clicking around in the Greenhouse desktop application :)
@gabek hey, do you have a link to that owncast webhook stuff you were talking about? I was looking for it on GitHub and I can't find it.
One of the biggest problems I have with my current polling-based approach (https://git.beta.sequentialread.com/forest/sequentialread-stream/src/branch/master/facecam/index.html#L153): when I get the list of current viewers, the JSON object for each viewer doesn't have their name (the name it displays in the top right) until they send their first chat message.
Can you think of a way to get that username right away when someone joins, or would it require changes to owncast?
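For context, here's roughly what my polling approach looks like; the endpoint path and field names below are placeholders I made up to illustrate the problem, not necessarily the real owncast API:

```typescript
// Placeholder shape for a connected viewer as returned by the polling endpoint.
// In practice the "username" field seems to be missing until the viewer sends
// their first chat message, which is the problem described above.
interface Viewer {
  clientID: string;
  username?: string;
}

async function pollViewers(baseURL: string): Promise<Viewer[]> {
  // "/api/clients" is a stand-in path, not necessarily the real endpoint.
  const response = await fetch(`${baseURL}/api/clients`);
  if (!response.ok) {
    throw new Error(`viewer poll failed: ${response.status}`);
  }
  return (await response.json()) as Viewer[];
}

// Poll every few seconds; only some of the viewers can actually be named yet.
setInterval(async () => {
  const viewers = await pollViewers("https://stream.example.com");
  const named = viewers.filter((v) => v.username !== undefined);
  console.log(`${viewers.length} viewers connected, ${named.length} with usernames`);
}, 5000);
```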
I just had an idea when I woke up this morning. I was thinking about how owncast can be configured to upload the HLS segments to object storage. What if the entire app, including all of the user-facing static content & user-facing API responses, could be uploaded too? Then the only thing missing would be the chat, and I know there would be ways to fix that.
Another idea I had: what if owncast could be run in "frontend" and "backend" modes? When it's run in frontend mode it's just a static file server + the chat; when it's run in backend mode it's just encoding HLS segments and uploading them to the frontend.
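Here's a rough sketch of what that "backend mode" publishing step could look like, assuming an S3-compatible bucket; the bucket layout, file paths, and the status payload are just my own assumptions for illustration, not anything owncast actually does today:

```typescript
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical "backend mode": alongside the HLS segments, push snapshots of
// the viewer-facing static content and API responses into object storage, so
// the "frontend" only needs to be a dumb static host plus the chat.
const s3 = new S3Client({ region: "us-east-1" });

async function publish(bucket: string, key: string, body: string | Buffer, contentType: string): Promise<void> {
  await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body, ContentType: contentType }));
}

async function publishSnapshot(bucket: string): Promise<void> {
  // The static page the viewer loads.
  await publish(bucket, "index.html", await readFile("webroot/index.html"), "text/html");
  // A frozen copy of an API response the frontend JS would normally fetch live.
  const status = JSON.stringify({ online: true, streamTitle: "my stream" });
  await publish(bucket, "api/status.json", status, "application/json");
}

publishSnapshot("my-owncast-frontend-bucket").catch(console.error);
```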
@gabek according to caniuse, this browser feature is available on 92% of all clients today: https://caniuse.com/es6-module
@gabek Hey, I did some first-impression load time tests with the owncast frontend app, testing pre-loading the JS modules via <script type="module" src="..."> tags. The first test with HTTP/1.1 took about 10 seconds regardless of whether the script tags were there or not:
https://picopublish.sequentialread.com/files/owncast-first-impression-http-1.1.mkv
But watch what happens when you turn on HTTP/2:
https://picopublish.sequentialread.com/files/owncast-first-impression-http-2.mkv
I am a web technologist who is interested in supporting and building enjoyable ways for individuals, organizations, and communities to set up and maintain their own server infrastructure, including the hardware part.
I am currently working full time as an SRE 😫, but I am also heavily involved with Cyberia Computer Club and Layer Zero.