
@ashfurrow@masto.ashfurrow.com @nolan I still think that if this operation of pushing/federating posts to followers' servers were implemented in "lightweight threads / async IO" all the way through, with no limit on the number of concurrent transactions, it would improve throughput dramatically on the same machine, without a hardware upgrade. Especially if it can use lock-free techniques like partitioning. But also, I've never written a line of Ruby nor cloned Mastodon's code, so I have no idea how much of an undertaking that would really be.
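
To make that concrete, here's roughly the shape I have in mind, sketched in TypeScript since I can't write Ruby (the function name, the inbox list, and the 2000 number are all made up for illustration; this is not Mastodon's actual delivery code):

```typescript
// Hypothetical sketch, not Mastodon's real code: fan one ActivityPub
// activity out to many inboxes using async IO. Each "worker" below is just
// an async task sharing one OS thread, not an OS thread of its own, so the
// in-flight count is a cheap knob.
async function fanOut(activity: object, inboxUrls: string[], maxInFlight = 2000) {
  const queue = [...inboxUrls];
  const workers = Array.from({ length: maxInFlight }, async () => {
    while (queue.length > 0) {
      const inbox = queue.pop()!;
      try {
        await fetch(inbox, {
          method: "POST",
          headers: { "Content-Type": "application/activity+json" },
          body: JSON.stringify(activity),
          signal: AbortSignal.timeout(10_000), // don't let a dead server hold a slot forever
        });
      } catch {
        // a real implementation would schedule a retry here
      }
    }
  });
  await Promise.all(workers);
}
```

The point being that the concurrency limit becomes a cheap number you can crank way up, because nothing here holds an OS thread while it waits on the network.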

@ashfurrow@masto.ashfurrow.com @nolan Ah nice, so maybe it's the database driver that needs to work based on fibers instead of a fixed-size thread pool?

Or maybe a different way of doing the required transactions might free up the system from this tyrannical "12" limit.

@ariadne@treehouse.systems @zkat @matrix

Yeah, I have been burned by this... Operating a Matrix server is a lot of work, and in my case it did ultimately come down to dealing with / cleaning up after abuse of the system.

We had some people on our server who know a lot about how Matrix works, and without them it probably would have had to be completely reset. In fact I believe the server was re-created at least once early in its history, before I got involved. I spent last weekend watching paint dry on a database dump/restore, and then messing it up and having to do it again.

But despite all of that I still like Matrix, and we have lots of folks on our server who get to participate without having to fix the server or deal with the "bad communities" problem themselves. I'm hoping that over time it will improve. A little more usability and functionality in the Matrix admin tools would go a long way, I think.

@nolan @shadowfacts

But the "more than 1 message per mail truck" sounds like its not in the protocol. So I think fibers is the only way for mastodon.

@nolan @shadowfacts

Sorry about my super long-winded, poorly written posts before; the sort of silly analogy I can make goes like this:

In the nation of Mastodon Server #42069, there is a postal service. All outgoing mail goes to the postal outbox. Postal workers drive to the outbox, pick up One (1) message, read the address (URL) on it, and drive to that server to deliver it. Then they drive back to the outbox and repeat. There are only 12 postal workers. The problem is that when folks' follower counts start rising, they start getting followers from thousands of different servers, and every time they do anything, it puts thousands of messages in that outbox. The poor 12 workers can't deliver them all 1 at a time, even if they were superhuman HTTP client machines.

My opinion: remove the limit on the # of postal workers completely, or, if you absolutely can't do that, then try to fit more than 1 message per mail truck.

@nolan @shadowfacts

Honestly I shouldn't have even mentioned the batching concept; 1 request per event is fine.

The main thing is getting that 12-concurrent-requests number up, way way up, until it's not the limiting factor any more. I am reading about this right now 👀 blog.saeloun.com/2022/03/01/ru

Maybe Mastodon can use that with its Sidekiq setup, instead of threads, and that fixes it?

@f0x Well it could be, tho... there's no reason why the keys in IndexedDB can't have the timestamp as the 1st part of the key... Then different timelines are just different filters on the same sorted set 🤔

I predict that you'll end up de-normalizing the data at least a little bit, or at the very least having multiple "indexes" within IndexedDB in order to look things up in various different ways... But I could be wrong lol, I've never tried to make a fedi client

@f0x scrollbar on a web page ~= IndexedDB cursor ?? 🤓
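
Something like this is what I'm picturing (a browser-side sketch; the database name, store name, and key shape are just my guesses, not from any actual fedi client):

```typescript
// Sketch: one object store of statuses keyed by [timeline, timestamp], so
// every timeline is just a key range over the same sorted set, and
// "scrolling" is walking a cursor. All names here are hypothetical.
const openReq = indexedDB.open("fedi-cache", 1);
openReq.onupgradeneeded = () => {
  openReq.result.createObjectStore("statuses"); // out-of-line keys: [timeline, timestamp]
};
openReq.onsuccess = () => {
  const db = openReq.result;

  // Write: the timestamp is part of the key, so entries come back sorted.
  db.transaction("statuses", "readwrite")
    .objectStore("statuses")
    .put({ id: "123", content: "hello" }, ["home", Date.now()]);

  // Read: the "home" timeline newest-first, like scrolling down the page.
  const range = IDBKeyRange.bound(["home", 0], ["home", Infinity]);
  const cursorReq = db
    .transaction("statuses", "readonly")
    .objectStore("statuses")
    .openCursor(range, "prev"); // "prev" walks backwards, i.e. newest first

  cursorReq.onsuccess = () => {
    const cursor = cursorReq.result;
    if (cursor) {
      console.log(cursor.key, cursor.value);
      cursor.continue(); // a real client would stop after a screenful
    }
  };
};
```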

Forest boosted
@nolan AIUI, it's basically an architectural issue with Sidekiq. Jobs run synchronously, and one worker thread runs one job, so an in-flight network request blocks an entire worker thread. So you have to put a lot more work into tuning Sidekiq to fully utilize the CPU/network resources.

I strongly suspect anything with green threads (Go, Elixir, etc.) would manage it better, because you can just kick off a green thread for every job and when one suspends while waiting for a network request, the OS thread can just switch to running a different job—so there's always forward progress.

I don't know if those languages have facilities for tuning the runtime based on network bandwidth, but they would at least get you a lot closer to CPU saturation.

@nolan

> The problem is that the code is using 90s concurrency technology

Think old-school Apache 1.x, or even older, inetd, versus a modern application like nginx.

The former uses one operating-system process or thread per request, while the latter uses epoll / asynchronous IO / event loops to handle thousands of concurrent requests all on the same OS thread.

This might be a bit of an inflammatory statement, but really, literally ALL the high-scale "fast" web tech uses the multiple-concurrent-things-on-same-thread design. So the fediverse software just has to adopt that if it's going to "scale".

Again, it's not a matter of buying a faster computer or using more energy to run the service; it's just a way of having your computer do 1000s of things at once (or even millions, depending on the system) without the computer getting bogged down switching between tasks like it does with OS threads.
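
A toy illustration of that design in TypeScript/Node, since the event loop comes built in (this isn't anything from nginx or Mastodon, just the shape of the idea):

```typescript
import { createServer } from "node:http";
import { setTimeout as sleep } from "node:timers/promises";

// Every request "waits on IO" for a second, but no OS thread is held while
// it waits, so thousands of requests can be in flight on this one thread.
// A 1-thread-per-request server would need thousands of threads to keep up.
createServer(async (_req, res) => {
  await sleep(1000); // stand-in for a slow database query or upstream fetch
  res.end("done\n");
}).listen(8080);
```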

----------------

Hugo Gameiro of masto.host goes on:

> Then, for each reply you receive to that post, 3K jobs are created, so your followers can see that reply without leaving their server or looking at your profile. Then you reply to the reply you got, another 3K jobs are created and so on.

Another thing I'm noticing here: really there should be 1 queue for each server that you federate with, or the queue should somehow be "partitioned" on the remote server URL. Instead of spawning a new task for each individual message to a server, maybe it should just build up the events destined for a specific server and then send them all at once in a batch. During normal operation, when it's not backed up, the batches will only have 1 message, but when it gets backed up, batching could help dramatically. If there are 100 events queued up for the average server, then batching would make consumption of the queue 100 times faster. IDK if ActivityPub supports this kind of batching tho.
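
Just to sketch the partitioning idea in TypeScript (`Delivery` and `sendBatch` are entirely hypothetical here, since as I said I don't know whether ActivityPub even allows more than one activity per request):

```typescript
// Sketch: partition the outbox by destination server instead of creating
// one job per message. Everything here is made up for illustration.
interface Delivery {
  inboxUrl: string; // e.g. "https://example.social/inbox"
  activity: object;
}

function partitionByServer(outbox: Delivery[]): Map<string, Delivery[]> {
  const byServer = new Map<string, Delivery[]>();
  for (const d of outbox) {
    const host = new URL(d.inboxUrl).host;
    const bucket = byServer.get(host) ?? [];
    bucket.push(d);
    byServer.set(host, bucket);
  }
  return byServer;
}

async function drainOutbox(outbox: Delivery[]) {
  // One task per destination server; if a server has 100 queued events,
  // they go out as one (hypothetical) batched request instead of 100.
  await Promise.all(
    [...partitionByServer(outbox)].map(([host, deliveries]) =>
      sendBatch(host, deliveries.map((d) => d.activity)),
    ),
  );
}

// Placeholder for "however a batch would actually be delivered".
async function sendBatch(host: string, activities: object[]): Promise<void> {
  console.log(`would deliver ${activities.length} activities to ${host}`);
}
```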

At any rate, I've never used Ruby, and I could be wrong about a lot of the stuff I'm saying about OS threads and how it relates to Ruby code, but at the end of the day, that 12 number is sticking out like a sore thumb. I'm very, very confident that the "actual solution" will be to remove the ceiling on the # of concurrent operations. We have the technology; it's just software. No need to buy a new server.

@nolan

So, I have never used Sidekiq before, but queues of various kinds, distributed systems, and their performance issues used to be my bread and butter at my old job.

From the Aral post, I think the most important part is to listen to what the server admin is saying about how the problem occurs:

> (you have 23k followers, let’s assume 3k different servers), as soon as you create the post 3k Sidekiq jobs are created. At your current plan you have 12 Sidekiq threads, so to process 3k jobs it will take a while because it can only deal with 12 at a time.

Right away I am seeing architectural red flags here. If 12 different servers that I am trying to push updates to are all currently overwhelmed and timing out, it sounds like the entire process will grind to a halt. But there's a reason for that, and it's not that the server with a lot of push work to do simply can't get it all done in time!!

The problem is that the logic the server uses to push updates is highly flawed. First of all, an arbitrary limit of 12 in-flight requests max is pretty damn low. I would adjust that up to 1000 or higher; a Raspberry Pi can handle 1000s of concurrent HTTP requests no problem. HOWEVER, I suspect that worker count of 12 (or whatever it is) is kept ridiculously low for a reason -- it sounds like each worker is its own OS thread, so spawning 1000s of them could do terrible things to the poor CPU, causing it to spend way too much time context-switching between OS threads.

This sounds like oldschool 1-thread-per-request concurrency. Still works fine in 2022 as long as the # of things happening at once is close to the # of processor cores you have. But when you start having (or wanting to have, for performance reasons) 1000s of things going on at once, you start looking at implementing asynchronous IO properly.

Coroutines, goroutines, greenlets, async tasks, lightweight threads, fibers, actors, event loops, epoll, async IO: they go by many different names depending on the language du jour, but they are all ways of having a single OS thread do multiple things at once without triggering OS-level context switches.
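
Back-of-envelope numbers for why that 12 matters so much (the per-delivery latency here is just a guess):

```typescript
// Rough estimate, not a benchmark: how long it takes to drain the fan-out
// for ONE post to 3000 servers, assuming ~0.5s per HTTP delivery on average
// (timeouts to dead servers would make this much worse).
const jobs = 3000;
const secondsPerDelivery = 0.5;

for (const inFlight of [12, 1000]) {
  const seconds = (jobs / inFlight) * secondsPerDelivery;
  console.log(`${inFlight} in flight -> ~${seconds}s to drain one post's fan-out`);
}
// 12 in flight   -> ~125s (and that's per post, and again per reply)
// 1000 in flight -> ~1.5s
```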

Key takeaways so far:

--------------------------------------

The problem is that the code is using 90s concurrency technology

Buying a faster computer won't really fix the problem, although it might ameliorate it a bit.

Properly fixing the problem involves refactoring the code so that it can do 1000s of things at once.

Note that doing 1000 things at once is totally normal for modern applications and it does not require a fast computer.
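
For example, this toy Node/TypeScript snippet starts 10,000 concurrent 1-second waits on a single OS thread and finishes in roughly one second:

```typescript
import { setTimeout as sleep } from "node:timers/promises";

// 10,000 concurrent "tasks" on one OS thread: each await parks the task on
// the event loop instead of blocking a thread, so they all overlap.
const start = Date.now();
await Promise.all(Array.from({ length: 10_000 }, () => sleep(1000)));
console.log(`finished in ${Date.now() - start}ms`); // ~1000ms on any modest machine
```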

w3.org/TR/webtransport/#certif

Finally, after 10,000 years I'm able to open a real socket to anywhere I want, from JS in a web browser! Time to Conquer Earth!
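
For anyone who hasn't looked at the API yet, it goes roughly like this (the URL and certificate hash are placeholders; serverCertificateHashes is the option I believe that spec section covers, and it's what lets a page trust a short-lived self-signed cert):

```typescript
// Sketch of the WebTransport browser API; the endpoint and hash are fake.
const certHash = new Uint8Array(32); // would be the SHA-256 of the server's certificate
const transport = new WebTransport("https://example.com:4433/chat", {
  serverCertificateHashes: [{ algorithm: "sha-256", value: certHash }],
});
await transport.ready;

// A bidirectional stream: basically a socket.
const stream = await transport.createBidirectionalStream();
const writer = stream.writable.getWriter();
await writer.write(new TextEncoder().encode("hello from the browser"));

// Unreliable datagrams are available too.
const dgramWriter = transport.datagrams.writable.getWriter();
await dgramWriter.write(new TextEncoder().encode("ping"));
```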

Forest boosted

guide for running GoToSocial behind cloudflare 

don’t :)

@fuuma

ameridroid.com/products/odroid

> System Memory – DDR4 .... supporting up to 64GB RAM in total

Yeah should be good :pacman:

How UX apathy leads to corporate capture 

@Literally @joepie91 my point is, it only takes 1 person who is capable of giving up their free time in order to get yelled at.
People do the darndest things.

Honestly, I wish more people would yell at **me** lol

How UX apathy leads to corporate capture 

@joepie91

I think the relationship goes the other way around. UX is seemingly perpetually "captured" by big corporations because they are the ones who created it and have big budgets dedicated to feeding it and keeping it alive.

But I think the good news is that, due to the zero-marginal-cost economics of software, a little investment into usability for FOSS can go a long way. Right now, independent software and usability people -- whether they've quit their jobs or maintain hobby work on the side while staying in the labor market -- are producing a lot of great software, and I do see a trend over the years of more and more "permacomputing"-type projects popping up: things like Gogs -> Gitea, Yunohost, Owncast, GotoSocial, etc. All those projects have a great focus on usability, and I'm hoping that trend continues. I'm also hoping that as more and more people exit the tech industry, they won't have blown all their top-tier salary money and can afford to bring their expertise and passion to free + open + usable + community software on a pro-bono basis.

Forest boosted

If your program does this, it's bad and you should feel bad:

Unknown option -h. Use --help for usage information.
