
@ljwrites

Yeah, I feel similarly about YunoHost. My two biggest wishes for a system like YunoHost are

1. Built to support replication/failover
2. Built to support multiple users

By "support multiple users" I mean similar to how Mastodon/Matrix servers do the "1 admin per ~100 users" model.

So for example I can share my server with my friend, create an account for them, and then they can get their feet wet and try out hosting something themselves without expending too much effort.

But at the same time, since it supports replication & failover, there's a reasonable path for those "experiments" to become well-loved and frequented destinations with reliability / longevity. When one admin falls (loses interest), another can rise to take their place without much fuss.

So I think that's what I'll work on next :)

@nolan i guess you could say you've been... thinkin about thos beans???

Forest boosted

I need every infosec person to understand that surveillance capitalism is structural, not individual, and we are not going to ethically-consume our way out of it please and thank

@ljwrites

> surveillance capitalism is structural, not individual

> Harassing people [...] does Not make anyone safer or more secure.

💯 More people need to hear this.

I would like to also offer a bit of my own rant and optimistic take on how the structural/systemic issues at hand here can be addressed.

IMO a lot of the "structure" at work here comes from economic forces that poured endless investment cash into research & effort on how to make client software and webapps usable by everyone.

Meanwhile, the usability of server applications / web infrastructure stuff is, for the most part, still stuck in the '80s and '90s.

I think tech folks with the resources and time can (and should!!) strike at the root of the problem. To me that mostly means trying to improve the usability of server software and make it more accessible to more people.

I don't mean everyone should run a server.

But as servers become more and more like web browsers (they "just work" on the first try and don't break when they update themselves automatically), it will become more and more likely that everyone will know someone, or a friend of a friend in their community, who _does_ run a server.

I liked the "TL;DR" from homebrewserver.club:

> Take the ‘home’ in homebrew literally and the ‘self’ in self-hosting figuratively

> That means we try to host from our homes rather than from data centres - a.k.a. ‘the cloud’ - and we try to host for and with our communities rather than just for ourselves.

I think the fediverse software and similar networks have sorta succeeded in that regard despite continued rampant usability problems on the server/admin side. It's encouraging to me that something like Mastodon, which is far from perfect, can still gain traction and continue to attract new users and inspire new projects.

Basically I want to be a home server evangelist, but if the thing I would be evangelizing costs money, takes time to set up, and still fails 99% of the time, what's the point?

Just need to get the software / systems to a point where they don't annoy ppl much, they can be easily shared with friends, and they fulfill a need. For example, they provide a sense of data custody and belonging within a local community, something folks'll never get from Google, Facebook, or AWS.

Yes, it's a tall order, it's insanely hard, and no one knows if this is even possible. But I feel like I would be doing myself and everyone else a disservice if I didn't try.

It's been many months since I've really seriously worked on any of my projects; in the meantime I had some fairly major life upheavals (getting covid, quitting drinking, starting therapy).

But lately I've finally been slowly getting back into it & reorganizing my thoughts. I do want to keep working on creating my own homebrew-server-oriented software project, but I'm going to start over at this point. Greenhouse was a bit of a failure and I think it needs a complete redesign.

sequentialread.com/greenhouse-

Minneapolis folks, I am planning on hosting a free workshop on how to make a website from scratch. (HTML and CSS)

The workshop is meant to be for folks who have never done it before, but experts are welcome too!

Check out more details and mark times you would be available @ framadate.org/ABPiTpWWEzqNWmo6

Also, plz boost if you are in the area. Thanks 🧡

Forest boosted

Today, the Bonfire team is excited to announce our beta release 🔥🎉

We’re aware that Bonfire still needs a lot of work - like ensuring it federates as expected and improving configurability, accessibility and user experience - but that’s the point: we decided to launch at this stage with the intention of building the 1.0 release as a community.

🗞️ Blog post: bonfirenetworks.org/posts/meet

🌈 Signup on the playground instance:
playground.bonfire.cafe

@charlotte @thufie

The bloat is just legacy... It's all legacy. It's just like email. Well, err, thank god it's not as bad as email 😅

@charlotte @thufie

:1000: I feel like way too often ppl conflate

"the big heavy bloat of the web / can't make a new web browser"

with

"the corporate takeover of the web / panopticon platform centralization"

They are totally not the same thing!

In my opinion the last ghost of a chance of a decentralized internet depends heavily on some of the "worst offender" bloat features in web browsers, like ServiceWorker, the local caches and databases, and connectivity enablers like WebSocket / WebRTC

The reason being: if the majority of browsers support these, it could make self-hosting and community hosting of web platforms and content way easier.
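To make that concrete, here's a rough sketch (my own illustration, nothing from a real project; the cache name and file list are made up) of how a ServiceWorker lets a self-hosted site stay readable even when the home server behind it flakes out:

```typescript
// sw.ts -- a minimal "cache first, fall back to network" service worker.
// The cache name and file list are invented for illustration.
declare const self: ServiceWorkerGlobalScope;

const CACHE = "home-server-v1";

self.addEventListener("install", (event) => {
  // Pre-cache the app shell so the site still loads even if the
  // home server behind it is temporarily unreachable.
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(["/", "/style.css", "/app.js"])
    )
  );
});

self.addEventListener("fetch", (event) => {
  // Answer from the local cache when we can; otherwise hit the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```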

I think what I wrote there, my thesis, still holds up 10 yrs later:

> If your app doesn't have a URL, who's going to use it?


@f0x @seedlingattempt@kolektiva.social @benx@kolektiva.social

My memory must be messed up -- I obviously don't remember what it was, I just remember that it was an impressive amount from my PoV. My server was serving up less than 100GB/mo, but I don't serve any social media stuff.

I was planning on charging $0.01 per GB for Greenhouse (DigitalOcean prices), but it would have made your situation cost-prohibitive. I also didn't know about the cheap bandwidth on Hetzner.

I would love to be able to make Greenhouse into a bargain-basement "efficient market" for bandwidth if possible; I'm still working on figuring out what that would look like. But I can definitely do better than $0.01/GB. At Hetzner prices it's about $0.50/TB, 20x cheaper.
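For anyone who wants to check the math, the back-of-the-envelope comparison (using the prices mentioned above) looks like:

```typescript
// $0.01/GB (the DigitalOcean-style price) expressed per TB:
const doPerTB = 0.01 * 1000; // $10/TB

// The Hetzner-style price mentioned above:
const hetznerPerTB = 0.5; // ~$0.50/TB

console.log(doPerTB / hetznerPerTB); // => 20, hence "20x cheaper"
```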

@seedlingattempt@kolektiva.social @benx@kolektiva.social

Yeah, I don't actually know that much about the market for bandwidth at the commercial scale; I just know that it's the most common thing you get nickel-and-dimed for on the public cloud. It could be a bit of a "gentleman's agreement" among the public clouds that they all get to overcharge for it.

But I think some of them don't; Hetzner, for example, offers bandwidth 20x cheaper than DigitalOcean's.

In terms of real numbers: @f0x, who runs the Mastodon server I use plus an active Matrix server and some other stuff, saw something like 80TB of bandwidth in a month, if I remember correctly. That's 4x the amount you get included with Hetzner's $5/mo VPS.
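Spelling out the 4x (the 20TB allowance is what that multiplier implies, not something I double-checked):

```typescript
const observedTBPerMonth = 80; // @f0x's rough number
const impliedIncludedTB = 20;  // assumed: what "4x the included amount" implies
console.log(observedTBPerMonth / impliedIncludedTB); // => 4
```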

@seedlingattempt@kolektiva.social @benx@kolektiva.social

Honestly, I would worry more about legal issues and regulations making it harder for the little guy to get access to public clouds. That's the only thing I can think of that would force people onto living-room servers.

But IMO it makes no sense for the government to do something like that: they already have their third-party doctrine, and everyone and their brother is happily occupying the public cloud panopticons, so why upset the apple cart? The gov't would lose an incredibly powerful surveillance weapon by kicking the grassroots out of the public cloud.

@seedlingattempt@kolektiva.social @benx@kolektiva.social

@seedlingattempt@kolektiva.social I don't know... I don't think that a lot of the stuff you are talking about can come to pass.

Yes, these hosting providers are driven by greed... but as far as we know, there's no monopoly, there's no syndicate. They **compete** with each other.

Also, keep in mind that in many ways public clouds are sort of like a utility, like water or electricity. The stuff they sell is fungible & you can purchase different amounts of it; generally, the price per unit stays the same-ish for a given provider. In fact, I would say it gets CHEAPER per unit as you buy in bulk, not more expensive.

IMO, this competition in a market for a water-like commodity means that we'll always be able to buy some if we want. The price isn't going to skyrocket. I don't see either supply drying up or demand exploding any time soon.

---

I worked in the enterprise software world for 5 years; for the last 1.5 years of that I worked as a DevOps specialist / SRE for a company that spent almost a million dollars a year on AWS EC2 instances and similar...

I'm extremely familiar with scaling software, the type of problems that come up at scale, and how that translates to the economics of the situation. In my opinion you are missing the most important aspect of the scale question:

**COMPUTERS ARE LIKE, EXTREMELY EXTREMELY FAST**

A computer can easily do a million things per second without breaking a sweat. Yes, even over a network, yes, even with on-disk persistence and each event being validated.

Computer science teaches us to ignore the "constant factors" (each event taking 5 microseconds to process versus each event taking 500 microseconds to process) and instead place a laser-like focus on the __Growth Rates__ of the CPU time and memory requirements as the scale of the problem grows.

In my experience at work, both things end up mattering, but if you don't get the growth rate stuff under control first, any optimizations that can be made won't change the overall picture much.
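Here's a toy sketch of that distinction, with the 5µs-vs-500µs constant factors from above and invented growth rates:

```typescript
// Constant factor vs growth rate, with made-up numbers.
// costPerEventUs is the constant factor (5µs vs 500µs per event);
// the shape of the function is the growth rate.
function linearCost(n: number, costPerEventUs: number): number {
  return n * costPerEventUs; // O(n): doubling n doubles the cost
}

function quadraticCost(n: number, costPerEventUs: number): number {
  return n * n * costPerEventUs; // O(n^2): doubling n quadruples the cost
}

// A 100x worse constant factor on a linear design still wins big
// against a "fast" quadratic design once n gets large:
const n = 1_000_000;
console.log(linearCost(n, 500));  // 5e8 µs  ~= 8.3 minutes of CPU time
console.log(quadraticCost(n, 5)); // 5e12 µs ~= 58 days of CPU time
```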

The problem here is that all too often, the "growth rate" of the CPU time, etc. (the "Big O notation" of your program or network) doesn't depend on what language it's written in, what hardware it runs on, or how fancy the network is. It purely depends on the DESIGN. The interface design. API design. How the parts fit together and move together.

All too often, software is designed quite well for one thing but ends up being used completely differently, or it's just designed poorly. There is not always an upgrade path from poor design, especially with a networked community of servers like Mastodon / ActivityPub. I don't know enough about ActivityPub myself to comment on how its design affects its ability to scale, but I do feel confident saying:

The API design of ANY software will affect its ability to scale 100x more than any economic issues like datacenter costs.
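As a made-up illustration of design setting the growth rate (these endpoints are hypothetical):

```typescript
// Fetching each follower's profile one request at a time: O(n) round trips.
async function profilesChatty(followerIds: string[]): Promise<unknown[]> {
  const out: unknown[] = [];
  for (const id of followerIds) {
    out.push(await fetch(`/api/profile/${id}`).then((r) => r.json()));
  }
  return out;
}

// If the API's design offers a batch endpoint, the same work is O(1)
// round trips -- same hardware, same language, very different scaling.
async function profilesBatched(followerIds: string[]): Promise<unknown[]> {
  const r = await fetch("/api/profiles?ids=" + followerIds.join(","));
  return r.json();
}
```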

Those enterprise cloud customers that pay $1M/year to AWS aren't paying that much "just because"... they're paying that much because it's cheaper than trying to re-architect their system with a better design. They probably pay over $10M/year in salaries and benefits... It's simply a lot cheaper to hire a few hundred virtual machines to run inefficient code than it is to hire a team of professionals to figure out an upgrade path away from said inefficient code.

New ActivityPub servers like GotoSocial promise to explore the limits of ActivityPub optimization. If ActivityPub's design allows for it, I predict that once properly optimized, a GotoSocial instance will be able to handle hundreds of users and thousands of federated connections WITHOUT needing a hardware upgrade. I predict bandwidth will actually be more expensive than the computation side of things!!

That's just my 2c.
