The problem with arguing against LLMs like ChatGPT purely on the basis that they "don't work" is that it might *seem* like an easier argument for convincing people, but in practice you've just specified the threshold of workingness at which exploitation becomes acceptable
Like, yeah, it's true that the tech doesn't really work. But that's... really not the main problem with it? You're actually going to have to engage on the topic of exploitation to get the point across; there are no shortcuts here