IMO this is a usability bug in mastodon, or in the specific mastodon docker container you are using. A configuration issue of this magnitude should trigger massive, highly visible error messages; it should make the app fail to start.
Querying the db showed:
```
# SELECT username FROM accounts WHERE id=-99;
username
----------------
localhost:3000
```
Which matched the value in the curl response:
```
"preferredUsername": "localhost:3000",
```
So a quick fix was to connect to psql and run the following update: `UPDATE accounts SET username = 'destituent.social' WHERE id=-99;`
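For what it's worth, the kind of fail-fast startup check I'm wishing for could be as simple as comparing the stored instance-actor username against the configured domain at boot. Here's a rough sketch (in TypeScript only for illustration, since I don't write Ruby); LOCAL_DOMAIN is Mastodon's real setting name, but the function names and the lookup are made up:

```typescript
// Hypothetical fail-fast check: refuse to start if the domain baked into the
// database disagrees with the configured one. The "id=-99" row is the
// instance actor queried above; everything else here is illustrative.
async function assertSaneDomainConfig(
  // stand-in for: SELECT username FROM accounts WHERE id=-99;
  fetchInstanceActorUsername: () => Promise<string>
): Promise<void> {
  const configured = process.env.LOCAL_DOMAIN;
  if (!configured) {
    throw new Error("LOCAL_DOMAIN is not set; refusing to start.");
  }
  const stored = await fetchInstanceActorUsername();
  if (stored !== configured) {
    // this is exactly the localhost:3000 vs destituent.social situation
    throw new Error(
      `Instance actor username "${stored}" does not match LOCAL_DOMAIN "${configured}"; refusing to start.`
    );
  }
}
```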
Now I can toot at my friends over on pixie.town!
Hopefully I didn't burn too many of your CPU cycles @f0x and @forestjohnson thanks!
streaming again today https://stream.sequentialread.com/
music: 🧪🏠 Goop House 🪴👽
dota2
live right now https://stream.sequentialread.com/
@firewally 👀 can I make an account for our dogs
@ashfurrow@masto.ashfurrow.com @nolan
I wish you the best and I hope that in the future you will have time for it, or whatever else you wanna do. I saw a lot of posts from mastodon.technology so thanks for hosting it :)
@nolan @shadowfacts Ok, reading more:
> While fibers provide a delightful interface to work with concurrency the real drawback of Ruby and other GIL (global interpreter lock) based languages is that the execution of threads is limited to only one native thread (per process) at a time.
It sounds like it's similar to Node.js, where one can run multiple process copies of the application which collaborate. Not unlike the signup table at a large event where you go to a different queue depending on the first letter of your last name: a-f line 1, g-o line 2, and p-z line 3, something like that.
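To make that concrete in Node.js terms, here's a minimal sketch using the built-in cluster module — the port and the one-worker-per-CPU choice are just placeholders, nothing Mastodon-specific:

```typescript
// Several full copies of the app collaborating: the primary process forks one
// worker per CPU, and incoming connections get spread across the workers,
// a bit like separate signup lines at the event table.
import cluster from "node:cluster";
import http from "node:http";
import os from "node:os";

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork(); // each fork is a complete copy of the application
  }
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  // every worker runs the same server; the cluster module hands out connections
  http
    .createServer((req, res) => {
      res.end(`handled by worker ${process.pid}\n`);
    })
    .listen(3000);
}
```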
@ashfurrow@masto.ashfurrow.com @nolan I still think that if this operation of pushing/federating posts to followers' servers were implemented with "lightweight threads / async I/O" all the way through, with no limit on the number of concurrent transactions, it would improve the throughput dramatically on the same machine without a hardware upgrade. Especially if it could use lock-free techniques like partitioning. But also, I've never written a line of Ruby nor cloned Mastodon's code, so I have no idea how much of an undertaking that would really be.
@ashfurrow@masto.ashfurrow.com @nolan Ah nice, so maybe it's the database driver that needs to work based on fibers instead of a fixed-size thread pool?
Or maybe a different way of doing the required transactions would free the system from this tyrannical "12" limit.
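A rough sketch of the shape I have in mind: the fan-out as concurrent async HTTP requests instead of jobs claimed one at a time by a small fixed pool. The inbox URLs are made up, and HTTP signatures, retries, and backoff are all omitted — this is nothing like Mastodon's actual delivery code:

```typescript
// Fan one activity out to every follower's inbox concurrently. Each in-flight
// delivery is just a promise parked on the event loop rather than a thread
// held for the duration, so there is no hard "12 at a time" ceiling.
const inboxes: string[] = [
  "https://pixie.town/inbox",
  "https://example.social/inbox",
  // ...thousands more, one per follower's server
];

async function fanOut(activityJson: string): Promise<void> {
  const results = await Promise.allSettled(
    inboxes.map((inbox) =>
      fetch(inbox, {
        method: "POST",
        headers: { "Content-Type": "application/activity+json" },
        body: activityJson,
      })
    )
  );
  const delivered = results.filter(
    (r) => r.status === "fulfilled" && r.value.ok
  ).length;
  console.log(`delivered ${delivered}/${inboxes.length}`);
}
```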
@ariadne@treehouse.systems @zkat @matrix
Yeah, I have been burned by this. Operating a matrix server is a lot of work, and in my case it did ultimately come down to dealing with / cleaning up after abuse of the system.
We had some people on our server who know a lot about how matrix works, and without them it probably would have had to be completely reset. In fact, I believe the server was re-created at least once early in its history, before I got involved. I spent last weekend watching paint dry during a database dump/restore, then messing it up and having to do it all over again.
But despite all of that I still like matrix, and we have lots of folks on our server who get to participate and don't have to fix the server or deal with the "bad communities" problem themselves. I'm hoping that over time it will improve. A little usability improvement / added functionality in the matrix admin tools would go a long way, I think.
Sorry about my super long-winded, poorly written posts before; the sort of silly analogy I can make goes like this:
In the nation of Mastodon Server #42069, there is a postal service. All outgoing mail goes to the postal outbox. Postal workers drive to the outbox, pick up One (1) message, read the URL address on it, and drive to that server to deliver it. Then they drive back to the outbox and repeat. There are only 12 postal workers. The problem is that when folks' follower counts start rising, they start getting followers from thousands of different servers, and every time they do anything, it puts thousands of messages in that outbox. The poor 12 workers can't deliver them all one at a time even if they were superhuman HTTP client machines.
My opinion: remove the limit on the # of postal workers completely, or, if you absolutely can't do that, then try to fit more than 1 message in each mail truck.
But the "more than 1 message per mail truck" thing sounds like it's not in the protocol. So I think fibers are the only way for mastodon.
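And to make the analogy concrete, the "12 postal workers, one letter per trip" pattern looks roughly like this — a caricature of a fixed-size worker pool, not Sidekiq's actual code:

```typescript
// Caricature of the bottleneck: a fixed pool of 12 "postal workers", each one
// picking up One (1) message, driving it to the destination server, and only
// then coming back to the outbox for the next one.
type Delivery = { inbox: string; body: string };

const outbox: Delivery[] = []; // a post from a popular account dumps thousands of entries here

async function postalWorker(id: number): Promise<void> {
  while (true) {
    const letter = outbox.shift(); // pick up One (1) message
    if (!letter) {
      await new Promise((r) => setTimeout(r, 100)); // outbox empty, idle a bit
      continue;
    }
    console.log(`truck ${id} driving to ${letter.inbox}`);
    try {
      // the whole mail truck is tied up for the duration of this one request
      await fetch(letter.inbox, { method: "POST", body: letter.body });
    } catch {
      // a real system would retry; the truck just shrugs here
    }
  }
}

// only 12 trucks on the road, no matter how deep the outbox gets
for (let i = 0; i < 12; i++) {
  void postalWorker(i);
}
```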
I am a web technologist who is interested in supporting and building enjoyable ways for individuals, organizations, and communities to set up and maintain their own server infrastructure, including the hardware part.
I am currently working full time as an SRE 😫, but I am also heavily involved with Cyberia Computer Club and Layer Zero.