
If you look at LLMs - a technology created through exploitation, for the explicit purpose *of* exploitation - and you conclude "well there might be legitimate uses", then your judgment is frankly garbage and you're looking for excuses not to have to take a real position

Even VLC' automatic subtitle generation and translation?

@nico The point here is about the attitude towards the category, rather than about any specific applications

But this one specific application breaks your conclusion.
At least one legitimate use exists, and it wasn't built for exploitation.

We can talk about its impact on professional subtitling versus how many videos would never be subtitled by a human at all.
But that's another debate...

@nico No, it does not remotely 'break my conclusion'. The technology was not created for VLC, and so it is irrelevant to my point.

@nico Like, I want to be very clear about this: I *do not care* if people manage to find nominally legitimate uses, and I have zero interest in arguing about exactly how legitimate they are.

It changes exactly nothing about how the technology was created, for what purpose, and how that has influenced its design choices and externalized effects. And *that* is the problem here.

@nico And the same holds here as what I said in my initial post: if you choose to focus on a handful of nominally legitimate uses, instead of the (intentional!) systemic dangers and harms perpetuated by the technology as a category, then you are not having a legitimate discussion - you are just looking for an excuse not to have to take a real position on the matter.

And if you focus only on the harms of a technology, you get laws that, in effect, recognize a blunt butter knife as an offensive weapon. I am not kidding; there was such a case in the UK in 2005.

@nico And no part of my original post was about laws.

I really have no interest in chasing goalposts. If you're not going to engage with the actual point I'm making, then please don't engage at all, because I certainly am not going to go off on a million irrelevant tangents.

I've spent the better part of a decade chasing these sorts of 'discussions' about other hyped technologies and none of them have *ever* resulted in any kind of useful outcome. I do not intend to waste any more time on this.

@nico In the current situation, there is exactly *one* valid discussion to have about LLMs: and that is one that recognizes that it is a fundamentally exploitative technology (not in the least due to its training data demands that are impossible to meet ethically), and where the discussion revolves around how to most effectively remove it from society.

Any other kind of discussion - and that *especially* includes "devil's advocate" type arguments - only serves one purpose, and that's to provide cover to the fascists running the show. And I will not engage in that. Its exploitative nature is not up for debate.

You want a real position? International treaties and laws are how you do that. So that's why I was speaking about them.

Permit me a little tangent: the recent withdrawals from the Ottawa treaty.
You would say that anti-personnel landmines have no legitimate use, no? They have known harmful effects on the civilian population.

Straw man fallacy? Maybe, but I see some serious parallels there, and I don't like that.

@nico

> You want a real position? International treaties and laws are how you do that.

No, they are not.

What you're doing here is historical revisionism, and it is bad even when used to denounce abuse.
The technology was created by academic and industrial research. The foundational paper, "Attention Is All You Need", was even about translation.

Then, of course, OpenAI happened.