
These days, pretty much every modern web server framework or library is built on the same non-blocking IO primitives that `nginx` popularized. But Mastodon is still stuck on thread pools, where each thread blocks while it waits on the remote client or server.

Mastodon doesn't fork off a new process or spawn a new thread for every request, but it's darn close to it.

How did the web evolve past this scalability challenge? It didn't necessarily involve buying a faster computer. The developers of the venerable `nginx` web server famously struck first blood by cracking the "c10k" problem (handling 10 thousand simultaneous connections in a single server application).

This happened in the early 2000s, and the nginx server in question was consuming only about 2.5MB of RAM during the load test.
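To make that concrete, here's a rough sketch (in C, error handling omitted -- my illustration, not nginx's actual code) of the event-loop pattern: one thread asks the kernel which sockets are ready and only touches those, so nothing ever sits blocked waiting on a single slow client:

```c
// Minimal epoll event loop: one thread multiplexes many connections.
// Sketch only -- real servers (nginx etc.) also do non-blocking writes,
// buffering, timeouts, and proper error handling.
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

    struct epoll_event events[64];
    char buf[4096];
    for (;;) {
        // Block here ONCE, on behalf of all connections at the same time.
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                // New connection: register it and return to the loop.
                int conn = accept(listener, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {
                // The kernel said this socket is readable right now,
                // so this read() returns immediately.
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) close(fd); // closed fds drop out of epoll
                else write(fd, buf, len); // echo back
            }
        }
    }
}
```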

This style of client and server application has its roots in things like `inetd` (the internet daemon) and CGI (the Common Gateway Interface). Benno Rice explains in a section of his excellent presentation covering the history of Linux and Unix:

video.strongthany.cc/watch?v=o

> [Then things changed...] the internet happened. That inetd model was great when [you were dealing with a small amount of stuff going on], like, [only a few users would have telnet connections] ...The web looked like it would work that way too, and then it became really really popular. And so you end up with situations where forking off a process to handle every single connection doesn't really scale that well.

Many Mastodon server users and admins have mentioned that the load from all the new users is straining their systems -- large outbound queues, delayed messages, slow page load times, etc.

The good news is that these problems don't have to be solved by buying a more powerful computer.

The Mastodon software uses an old (circa the 90s and earlier) way of organizing its code, which I like to call "one-thread-per-request with blocking IO".
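For contrast with the event-loop sketch above, here's roughly what that older pattern looks like -- again a rough illustrative sketch in C (pthreads, error handling omitted), not Mastodon's actual Ruby internals:

```c
// One-thread-per-request with blocking IO: the circa-90s model.
// Sketch only -- at 10k connections this means 10k mostly-idle threads,
// each one costing kernel resources and stack memory.
#include <pthread.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static void *handle(void *arg) {
    int conn = (int)(long)arg;
    char buf[4096];
    ssize_t len;
    // The thread BLOCKS here whenever the client is slow or idle.
    while ((len = read(conn, buf, sizeof(buf))) > 0)
        write(conn, buf, len); // echo back
    close(conn);
    return NULL;
}

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        int conn = accept(listener, NULL, NULL);
        pthread_t t;
        // One whole OS thread per connection, parked on blocking reads.
        pthread_create(&t, NULL, handle, (void *)(long)conn);
        pthread_detach(t);
    }
}
```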

One of the newer, more efficient server implementations is GoToSocial, which I see as a dark horse poised to surpass Mastodon and become the best general-purpose Fediverse server implementation.

nlnet.nl/project/GoToSocial/

Congrats to everyone who has worked incredibly hard to make that project a reality!


Look at all the cool projects that NLNet is funding right now!!! 😮

nlnet.nl/project/current.html

@lefractal

The only reasons I can think of:

1. Tor is slow
2. A VPS costs money
3. You have to place your TLS private key on the VPS (so you are giving your private key to the VPS provider)

I did create an alpha version of a cloud service designed to do almost exactly this and make it as easy as possible to set up: greenhouse.server.garden/

Right now that project is in a bit of a hiatus / rethinking phase, but AFAIK it still works and can almost be used in "production".

I say "almost" because I think there are still some bugs around re-connection; in order to be truely production ready, the greenhouse-daemon service that you run on your server should be wrapped inside a health-check / auto-restarter.

The benefit of greenhouse: You don't have to pay for a VPS & you get even better data custody / security than a typical "lazy/naive reverse proxy over tunnel" setup.

The TLS will be terminated on your home server instead of on the VPS, so you get exclusive ownership of your TLS private key. Plus, greenhouse "automagically" handles the `PROXY` protocol stuff for you, so your HTTP server/app will see the proper remote IP of the connected client via the `X-Forwarded-For` HTTP header.

Because of the lingering bugs in greenhouse, I don't use it myself.

cyberia.club uses something similar to what you mentioned for our own services: wiki.cyberia.club/hypha/infras

It's the same thing as what you described, except instead of Tor it uses SSH. We don't have to be concerned about having the TLS keys on the VPS because it's our own VPS hosted on our own hardware (capsul.org).
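For anyone curious, the core of that kind of SSH setup is just remote port forwarding, roughly like this (hostname, ports, and user are placeholders, not our actual config):

```bash
# On the home server: forward the VPS's port 443 back to this machine.
# -N: don't run a remote command; -R: remote ("reverse") port forward.
# Binding the VPS's public interface needs "GatewayPorts yes" in its sshd_config.
ssh -N -R 0.0.0.0:443:localhost:443 tunnel@your-vps.example.com
```

In practice you'd keep that connection alive with something like autossh, and terminate TLS wherever your threat model says the keys should live.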

Forest boosted

Can I get some voluntary load testing from #fediverse users lol, please boost/interact with this post.

Forest boosted

1. make the "link to post" (which is currently a bit too hidden as an un-indicated hyperlink under the timestamp) more prominent

2. expand the search bar to cover the whole top of the page, perhaps even make the current URL show there by default...? IDK, maybe this is too much... But it's gotta be a better solution than that awful "type in your full handle in order to take this action" popup window

Yeah, I am realizing that over time; also, paste-URL-into-search-field is a core usage pattern for fedi.

You know what, I'm realizing that maybe the correct implementation for a fedi client has **a freakin <em>URL BAR</em>** at the top o' the page like it's its own browser.

@fack BTW how much RAM does it use? Honestly considering running my own until the GoToSocial + UI options are looking better.

@fack I'm sure there's a way to do this... I'm assuming you can't just shell in and slap your own theme file in there? Does fly let you run your own container? Maybe you can do a quick docker build with the container you use now in the FROM line and then just COPY or RUN whatever you need to get the custom CSS installed.
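Roughly like this, I mean -- total sketch, the base image name and CSS destination path are guesses, swap in whatever you actually run:

```dockerfile
# Illustrative sketch: image name and CSS path are placeholders.
FROM ghcr.io/some-org/the-image-you-run-now:latest

# Drop the custom theme wherever this image serves static assets from.
COPY custom.css /app/public/custom.css
```

I believe fly.io builds and deploys from a Dockerfile in the app directory by default, so `fly deploy` should pick something like that up.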

@auroris@artisan.chat The instance I am on puts a big ugly red CSS border around all non-alt-text'ed images on timelines. I really like that feature, I think it's something more instances / web apps should do.

It looks like there is a small guide on it posted by feditips here:
mstdn.social/@feditips/1078544

And the direct link to the css Gist on GitHub: gist.github.com/FiXato/3de505b
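The core trick is just a CSS attribute selector matching images with missing alt text, something like this sketch (the real selectors depend on each web app's markup, so treat these as placeholders):

```css
/* Outline any image that has no alt attribute, or an empty one. */
/* Selector specifics vary per app version; scope to timeline    */
/* containers and adjust to the actual markup as needed.         */
img:not([alt]),
img[alt=""] {
  border: 4px solid red;
}
```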

I created an account for my dogs to try out pixelfed: @fedipugs

So far it's been a bit rough (no video support, hyperlinks get broken 🪦)

But if you like cute pet pictures, maybe it could be worth a follow. I'll try to post when I can, and occasionally maybe repost older stuff from the Instagram that my partner made in the past.
