@amolith@nixnet.social

I hear you, but isn't a docker-compose.yml file plus a pile of Dockerfiles just a codified expression of "how do I set up this app"?

The same could be said of a collection of ansible roles, .deb/.rpm packages, or raw scripts. Typically one can port/translate between those paradigms easier than one can learn how to set up the app from scratch, i.e. by reading the application code.

No matter which one a developer picks, if they pick just one, it's guaranteed to alienate a portion of their audience. I think folks go for docker because it alienates **a smaller number of people** on average.

There are a lot of reasons why docker got so popular; a lot of it has to do with how hard the "proper" system admin stuff really is. No one ever did the hard work to try to make unix admin easier and more normie-friendly until docker. Hate the game, not the player.

That said, obviously in an ideal world, all open source developers could be well-supported enough to maintain at least a couple different supported installation methods.

@fribbledom I've started just removing the cookie popups with my ad blocker 🙃

@gabek Thanks for this BTW, I put a disclaimer saying that what I did is definitely not required to be able to use Owncast, it was just for fun.

@gabek Mali-T628 MP6

I believe that the thing I'm using can in fact do real time video transcoding, even on the CPU (8 cores baybeee). But I wanted to experiment with something where it only gets encoded once -- something that would work even when transcoding on the server is not an option.

@gabek Yes, using a torrent-ish distribution mechanism would 100% introduce tons of stream latency. I think that's a price many folks would be willing to pay, however, if the alternative is what you mentioned, paying for / giving your data to a CDN.

I am interested in it precisely **because** it is hard / no good solution for it exists today, not despite that 😈

I saw the comment on the owncast issue stating that peertube already added p2p live streaming -- this is simply not true! PeerTube is using HLS just like owncast is. It is 100% handled by the server.

@gabek Thanks. Yeah I really like how HLS is simply based on HTTP & works well with existing web servers. I'm not trying to push this as a new feature for owncast really, I just wanted to upgrade my current jank solution (for streaming from a potato web server) with a slightly nicer jank solution 😀

Next up on my list would be some sort of p2p acceleration so we can stream to 1000s of viewers

@gabek

I saw you added features for controlling the latency -- have you tried to optimize for the smallest stream delay? I'm curious if what I've been doing here is even worth anything in that department. I was hoping I could get it to be even faster, but it sounds like the way HLS works sorta limits it fundamentally.

I was able to modify owncast to stream HLS segments that are output by OBS directly instead of re-encoding the video on the server -- got it down to 10 seconds of stream delay in my experimental test!
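For context on why HLS has a latency floor: players typically buffer a few segments before starting playback, so the delay can't drop much below segment duration times buffer depth. Here's a back-of-the-envelope sketch -- all numbers are illustrative assumptions, not measurements from owncast or OBS:

```python
# Rough sketch of the HLS latency floor. The "3 segments buffered" figure and
# the 1-second encode/upload overhead are assumptions for illustration only.

def min_hls_delay(segment_duration_s: float, segments_buffered: int = 3,
                  overhead_s: float = 1.0) -> float:
    """Back-of-the-envelope lower bound on HLS glass-to-glass delay:
    the player waits for `segments_buffered` full segments, plus whatever
    time encoding and uploading each segment costs."""
    return segment_duration_s * segments_buffered + overhead_s

# With 4-second segments (a common encoder default) and a 3-segment buffer:
print(min_hls_delay(4.0))   # 13.0 seconds
# Shrinking segments to 2 seconds gets into the ballpark observed above:
print(min_hls_delay(2.0))   # 7.0 seconds
```

This is why shortening segments helps, but only up to a point -- each segment still has to be fully written, fetched, and buffered before it plays.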

its saaaaaturday

drinking beer and messing around with owncast code, direct HLS streaming from OBS any% attempts

stream.sequentialread.com/

@gabek

Have you ever looked at doing this with Owncast?

obsproject.com/forum/resources

OBS stream direct to HLS

If you haven't given it a shot yet, I'd love to tinker with this!

Also, since I lost the context of the other part of the post and you can't edit posts on mastodon, I think I should clarify that by "needs a Kubernetes" I don't mean "needs a big complex thing that takes millions of man hours to create" or "needs a docker-based distributed clustering/scheduling system".

If you've never worked with Kubernetes you might not know this, but Kubernetes itself isn't actually an implementation; it's a set of interfaces that define standard ways for all the parts of such a distributed clustering/scheduling system to work together. What people colloquially refer to as "Kubernetes" is actually those interfaces plus (usually) the reference implementation of each of the interchangeable parts.

But the magic is that you can swap those parts out with your own if you want. You can upgrade one part without breaking the others. You can have a proliferation of "flavors" of Kubernetes, similar to the proliferation of Linux distributions.

I just think that we as "small web" developers should be mindful of this trend tech has followed since its inception, since the unix days -- small, simple programs that can work together.

I imagine a "small tech kubernetes" as a set of interfaces that all of our projects can conform to so they can interoperate and proliferate. So other developers can take them and adapt them to other use cases without losing as much interoperability.
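To make the idea concrete, here's a purely hypothetical sketch of what "a set of interfaces" could look like, analogous to how Kubernetes defines pluggable interfaces (CRI, CNI, CSI) that any implementation can satisfy. Every name here is invented for illustration; no existing small-web project defines these:

```python
# Hypothetical interfaces a "small tech kubernetes" might standardize.
# A yunohost-alike or sandstorm-alike could each ship its own implementations,
# and apps written against the Protocols would stay portable between them.
from typing import Protocol


class AppInstaller(Protocol):
    """Anything that can install/remove a packaged self-hosted app."""
    def install(self, package_url: str) -> None: ...
    def uninstall(self, package_url: str) -> None: ...


class BackupTarget(Protocol):
    """Anything that can snapshot and restore an app's data."""
    def snapshot(self, app_id: str) -> bytes: ...
    def restore(self, app_id: str, data: bytes) -> None: ...


# One possible (toy) implementation of the installer interface:
class DummyInstaller:
    def __init__(self) -> None:
        self.installed: list[str] = []

    def install(self, package_url: str) -> None:
        self.installed.append(package_url)

    def uninstall(self, package_url: str) -> None:
        self.installed.remove(package_url)


installer: AppInstaller = DummyInstaller()
installer.install("https://example.com/some-app.tar.gz")
print(installer.installed)
```

The point isn't this particular API -- it's that once the interface is agreed on, implementations become interchangeable, which is exactly the property the competing projects lack today.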

explanation of what I meant by "secure-attestation-based" at the end:

The first part talked a lot about servers and ownership over processes & how it's related to power, having power over other people. About how in academic computer science right now, no one has figured out how to make a process that operates directly on data but does not "own" that data (i.e., can censor, falsify, or spy on it).

Also mentioned how DRM today uses an ersatz solution for this called CPU secure enclaves and secure remote attestation. And how some projects (Signal's secure contact discovery) have started using the same tech to try to liberate people, but I have a lot of doubts about how viable this ersatz solution is in the long term / how viable it is for more widespread use.

For what it's worth, here's the slightly edited second half which I didn't lose:

This was responding to the room's consensus regarding "inside-out design (architecture first, UI later) / trying to please everyone being a fool's errand":

Maybe, but isn't the opposite also a fool's errand? I do think that "outside-in" design (make the UI first, then decide how the platform should work to support that UI) & attempts to reject complexity may end up lonely. I think the history of small tech has primarily been a history of failure (at least when you look at it in the grand scheme of things globally), not just because of poor UX, but also because of technical fragmentation. Our predecessors burned bright and created many, many wonderful things. But how many of those things are still used today? How many more dead projects do we need?
en.wikipedia.org/wiki/Comparis

Reminds me of XKCD's "Standards" xkcd.com/927/

I think small tech needs coalescence more than anything. There's a reason why most people settled on using GNU+Linux for servers... Now the corporate world is settling on Kubernetes as well, for good reason. Building/deploying/operating software on Kubernetes is easier for them, and it's easier to train/learn/hire for as well. What can **we** settle on? I think small tech needs a Kubernetes of its own, but designed for the small-tech use case & with a much better user experience. Right now we have about 6 or 7 competing projects: nextcloud, syncloud, yunohost, sandstorm.io, Site.js/small-web.org/Basil, etc. None of the parts of any of them are designed to be interoperable or interchangeable. What happens when one of these projects stops being maintained? What if I start using one of them, but then I really want a killer app or feature that's only available on another one?

I believe that solving this kind of problem does require inside-out design. There are unique challenges and technical constraints associated with shoe-horning as much user ownership as possible into the digital everyday (cloud services, ISP-owned home routers, NATs, smart TVs, shared WiFi, etc) which we inhabit. There may be many different ways to do it, but I would like to believe it's possible to define standards, interfaces, etc which cover all the possible use cases while maintaining interoperability. Technologists have been doing this kind of thing for decades... At least IMO, all the tech that declined to coalesce around interoperable standards is dead or dying.

I also sorta disagree with Aral that "popularity / scaling is the way to the dark side", although it's probably just semantics.

At some point we will have to scale small-tech. Not just scaling to millions of individual user-owners, but also building ways for individuals' sites, data, and processes to grow, to become highly available, withstanding natural disasters, government repression, hell, maybe even the viral "hug of death" effect associated with reaching the front page of an aggregator like reddit or trending all across the future fediverse. Probably p2p, secure-attestation-based "distributed cloudfront" or something similar will have to become involved at this point.

This may not be happening yet, but I'd rather not plan for failure. I don't want to end up having to completely re-architect my systems to accommodate a future where we succeed.

Also shout out to @f0x for trying to recover it for me; turns out mastodon actually deletes posts when the user asks it to delete them. Hooray for humane tech!

Enjoyed watching @laura and @aral's "Small is Beautiful" show yesterday featuring @gabek of Owncast fame and @heydon the web accessibility expert behind Webbed Briefs briefs.video

Wrote a huge effortpost (so big it took up 2 toots) trying to respond to everything that was discussed, then promptly messed it up, got trolled by the mastodon threads / "delete and redraft" feature and accidentally deleted the wrong post, permanently losing the data. Oops. Still learning how to use mastodon properly. :gnomed:

@michael Ok, I got it fixed.

It looks like the Windows HTML5 video player implementation disables seeking if the HTTP server serving the video does not support HTTP byte range requests. So I added that to my crappy file upload app and now it works on Windows.
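For anyone hitting the same thing: when the user seeks, HTML5 `<video>` players send a `Range: bytes=start-end` header and expect a `206 Partial Content` reply with a `Content-Range` header; without that, some players disable the seek bar. Here's a minimal sketch (not the actual app's code, and it only handles a single range, not multipart ranges) of parsing that header:

```python
# Minimal single-range "Range: bytes=start-end" parser, as a server would
# need before replying 206 Partial Content. Illustrative sketch only.
import re

def parse_range(header: str, file_size: int):
    """Return (start, end) byte offsets for a 'bytes=...' Range header,
    or None if the header is absent, malformed, or unsatisfiable."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header or "")
    if not m or (not m.group(1) and not m.group(2)):
        return None
    if m.group(1):
        start = int(m.group(1))
        # An open-ended "bytes=500-" means "from 500 to the end of the file"
        end = int(m.group(2)) if m.group(2) else file_size - 1
    else:
        # A suffix range "bytes=-200" means "the last 200 bytes"
        start = file_size - int(m.group(2))
        end = file_size - 1
    if start < 0 or start > end or end >= file_size:
        return None
    return start, end

# The handler would then send:
#   206 Partial Content
#   Content-Range: bytes {start}-{end}/{file_size}
#   Content-Length: {end - start + 1}
print(parse_range("bytes=0-499", 1000))   # (0, 499)
print(parse_range("bytes=500-", 1000))    # (500, 999)
print(parse_range("bytes=-200", 1000))    # (800, 999)
```

If you're in Go (like owncast), `http.ServeContent` handles all of this for you, which is one reason to prefer it over writing the response by hand.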

@michael I've updated it with H.264 and a much more detailed description for each video 🧡

@michael Thanks for the feedback, it really helps me to know when I miss.

yikes, non-seekable video, definitely switching back to H.264 as the default.

At the very least, for now I can add a summary at the top describing what greenhouse does and why I'm building it, plus a link to another post expanding on that.

In the future I plan on doing much more professional presentation for this stuff, for now I'm focused on getting the software put together and ready to go out the door with an "alpha" sticker on it.
