
tobacco industry, harm reduction 

Sigh, I guess I should've expected corporations to try and co-opt "harm reduction"...

To be clear about this, "harm reduction" is what you do when you cannot solve the situation fully because of circumstances or (more frequently) an oppressive system that you do not control, and so the best you can do is mitigate the impact.

It is NOT an acceptable thing to do or argue when you *are* the ones in power and responsible for the problem existing in the first place, like tobacco companies are trying to do! The only valid approach in that situation is to stop harming people entirely.

If you live in the USA and have been having trouble getting your #ADHD meds, please consider filling this out.

It appears the government is going to be meeting with the FDA about this and wants people to comment.
regulations.gov/commenton/FTC-

Edit: my wife said this is not just for ADHD meds but for anybody dealing with medication shortages recently.

why AI is not impressive 

I genuinely believe that the "technological advancements" of modern AI, i.e. Advanced Interpolation, are not impressive. like, a lot of people will say that "you gotta give it to them" because the technology is impressive, but it's not, and here's why.

note that I'm going to be heavily simplifying the mathematics of AI in a way that is close enough for explanation, but not entirely accurate. don't @ me about it.

modern AI accumulates the sum total of all human knowledge into several machine models, using the most advanced interpolation methods mathematics has available, to create something that barely works.

think of it this way: the reason why GPT and co. are treated like search engines is because search engines are really the predecessor to these models. Google and co. have done an incredible amount of work representing data in a generic way such that it can be looked up. however, there are a few key differences between Advanced Interpolation and search engines:

- AI has access to private data that search engines will not show you. these models are trained on the private information of thousands of people, handed over unknowingly to companies that have been scratching their heads trying to monetise it. they include texts that must be purchased, images and videos that were never released to the public, etc. the reason why they can do a pretty good impression of what a person looks like is that they have millions of photos of people that were never published online, but were scooped off of people's phones anyway
- for information that is untagged, AI leverages exploited labour to tag it instead. for example, you might not say what's in your photo album, but an underpaid, exploited worker can easily tag it as a person, maybe even what kind of person, after a second's glance
- AI does not keep a copy of the original work. what I mean by this is that, unlike a search engine, it no longer tries to find the specific source that matches your terms; it just claims to answer your query directly, by combining existing information in the best ways it can

and I say that these are the "most advanced" interpolation methods, but even that is giving it a lot of credit. the mathematics behind AI, although obscured behind weird metaphors for biology ("neural networks," etc.), isn't particularly new. the new part is the absolute scale of the data involved. humans can only visualise data in three dimensions, not even four, but data inside an AI model might have anywhere from thousands to millions of dimensions. even if you just draw a line between all the points on a graph, a million-dimensional line will still be incomprehensible to a human, and might look genuinely impressive. except it's not just a line, but a polynomial of enormous order, maybe degree ten to a hundred. the mathematics of that isn't anything new either; the only new thing is choosing to allocate so much energy and so many resources to something so ridiculous

like, in addition to it matching the acronym, I call AI Advanced Interpolation because it's only that: connecting between points instead of going beyond them. it's been demonstrated that existing AI models are absolutely horrendous when asked to stray even slightly beyond their input parameters, and something you might not realise is that their input parameters are poorly defined.

like, the entirety of the internet has just been treated as training data for these models. we can convert all the text, images, audio, and video on the internet into these million-dimensional points and draw a hundredth-order polynomial between them, and you'll get pretty good results if you ask about the definition of a word, or ask to draw a picture of something people take pictures of every day.

but like all interpolation methods, these ones fail ridiculously when pushed beyond their initial boundaries. one example (which is only partially mitigated by the way AI works) is Runge's phenomenon, where adding more points to interpolate (and so a higher-order polynomial) can actually make the fit worse, especially near the edges of the data, and even if you improve the interpolation between the points, extrapolation is still pretty bad.
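
here's a rough one-dimensional toy of what I mean, in Python (the function, the 21 samples, and the degree-20 fit are just illustrative picks of mine, nothing taken from any real model):

```python
import numpy as np

def runge(x):
    # the classic textbook example function for Runge's phenomenon
    return 1.0 / (1.0 + 25.0 * x**2)

# 21 equally spaced samples on [-1, 1], fitted with a degree-20 polynomial
# (numpy may warn that this fit is poorly conditioned; that is rather the point)
xs = np.linspace(-1.0, 1.0, 21)
coeffs = np.polyfit(xs, runge(xs), deg=20)

# compare the polynomial against the true function on a dense grid
dense = np.linspace(-1.0, 1.0, 2001)
errors = np.abs(np.polyval(coeffs, dense) - runge(dense))

print("max error in the middle ([-0.5, 0.5]):", errors[np.abs(dense) <= 0.5].max())
print("max error over the whole interval:    ", errors.max())
print("value just outside, at x = 1.05:      ", np.polyval(coeffs, 1.05),
      "(true value:", runge(1.05), ")")
```

the fit is decent in the middle of the data, gets visibly worse towards the edges, and falls apart completely the moment you step outside the sampled range.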

and like, sure, we do a lot of standard things every day, but there are also a lot of non-standard things we do. as an example, there are 52-factorial ways to arrange a deck of 52 cards, which means that even if everyone on earth shuffled a deck every morning, it would still be effectively impossible for anyone to get the same shuffle before the universe ends.
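
for scale, here are the rough numbers (the 8-billion population and 14-billion-year figures are round ballpark assumptions on my part):

```python
import math

arrangements = math.factorial(52)      # possible deck orderings, about 8.07e67

people = 8_000_000_000                 # roughly 8 billion people
days = 365 * 14_000_000_000            # one shuffle a day for ~14 billion years
shuffles = people * days               # about 4e22 shuffles in total

print(f"possible orderings:          {arrangements:.2e}")
print(f"shuffles performed:          {shuffles:.2e}")
print(f"fraction ever explored:      {shuffles / arrangements:.1e}")

# even the birthday-paradox-style chance of ANY two of those shuffles
# colliding is roughly shuffles**2 / (2 * arrangements), around 1e-23
print(f"chance of any repeat at all: {shuffles**2 / (2 * arrangements):.1e}")
```

so even with wildly generous assumptions, the space of possible shuffles stays essentially untouched.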

so, take all combinations of words and account for some grammatical similarities, and it's likely that several things said every single day have never been said before, ever. there's also a reason why every piece of art is unique; sure, there are plenty of similarities, but many pieces of art are made every day that have never been made before. all of these can be interpolated between the things the model has seen before, but the moment you add just a few too many things that it hasn't seen very often, it messes up. it can't even draw hands, for fuck's sake.

and like, what did it take to get here? an amount of CPU and GPU time far greater than has been spent on basically anything useful in the entire history of computers, for sure. (CPU time meaning time multiplied by the number of CPUs doing the work; having two computers do the same thing at once means double the CPU time per unit of actual time)

like, I'm not impressed. you've collected the sum total of human knowledge, exploited a bunch of people, guzzled water and other precious resources at a time when the planet is in grave danger, and for what? something that barely works? fuck you. abandon your shitty project and do some actual research for once.

I have noticed that privileged tech dudes are overwhelmingly overrepresented in the group of "people who are fine with the threads implementation as it exists in Matrix/Element today" (and most everyone else seems to hate it and refuses to use it) and I'm not sure what to do with that observation

CW-boost: healthcare system comparison, superficial description of medical procedures 

Dear instance admins,

Please set your account sign-ups to manual review. Your instance doesn’t need to grow at any cost.

Remember: Quality, not quantity.

#fediverse #mastodon #spam #resiliency

I want to deploy my servers on a new architecture, but it feels like a bit of a risc

Do you know that list of Trump's lies?

We need one of those too, but for VVD politicians.

Oh we’re doing “your admins can see your DMs” again?

You’ll never guess what’s true about literally every website that has a DM feature

Is there a term (English or not) for when people make something work for them in a way that is busted as hell but charmingly effective and doesn't need to be improved? I'm thinking of dragging windows around in a screen recording frame as a way of editing videos rn, but also stuff like idk using a keychain ring on a pants zipper to loop around a button and keep it up, etc. The emphasis is more on the unexpectedness or laterality of approaches than on them being "bad."

"Hack" or "kludge" implies too much jankiness and is usually negative, "macgyvering" or "Jerry rigging" is too specific to using only what is on hand to do something, there is a Brazilian word "gambiarra" that I think is quite close but I don't have the cultural context to tell.

Edit: I think gambiarra is what I'm looking for here, thanks @catzilla !

tumblr but also more general, grump 

Very very tired of people's first response being to try and argue that such-and-such isn't transphobia/racism/whatever instead of taking a step back and thinking or asking about why people feel that it *is*

"Some people try to make everything political" is a political stance

: does anyone know a shop (in NL, or with affordable shipping to NL) that sells all kinds of pride-themed things like clothing? And preferably one that is actually part of the queer community, not some opportunist trying to make a quick profit

"Technology will not solve social problems" is true, but so is "social solutions to social problems may need certain technology to make them viable", and both are important or otherwise you'll just end up with a dogmatic anti-technology stance

It feels like not long ago that everyone was talking about Tears of the Kingdom.

As an extension of Breath of the Wild's open-world Zelda model, TotK had a lot of interesting ideas, a fascinating physics system, and generally struck a chord with a wide audience. But the more I played it, the more disillusioned I became at its narrative, especially in its use - and attempted subversion - of tropes.

So here's an unlisted thread on careful application of harmful tropes in videogame narratives:

: does anyone know of a trustworthy Firefox extension for one-click-zapping various dialog walls (paywalls, registration walls, etc.) where the content is still visible below, but the view is obscured and the scroll is blocked?

(Element zapper in uBlock is not sufficient; it requires too many clicks and does not deal with scroll blocking)

a general piece of advice, for fedi and elsewhere 

Even if you're marginalized, you are probably privileged along *some* axis. And sometimes, or even often, the right decision is to just step back and *absorb* things. You don't need to have an opinion about everything. It's fine to excuse yourself from a discussion.

Follow some people who are marginalized or living a different kind of life in a way you are not familiar with. Quietly. Not commenting on their experiences or giving advice, just *absorbing* them. For months, if needed.

I promise that you'll learn things you'd never considered before, and the only thing you have to do is listen quietly, and stop yourself from responding to them. Nobody will expect you to opine on them just because you've read them.
