I hate how all AI hype is predicated on "if we can just make this not be broken then it would be an amazing product"

And because AI-produced things look kinda close to the real deal, people buy it. Because it feels like it just needs a small improvement, even though its flaws are a fundamental part of the technology

Just don't draw the weird 6th finger. Just don't make up things when you don't have a real answer. Just don't change the environment in an AI generated game entirely if the player turns around 180 degrees

These things *feel* like they're small, solvable problems to people who don't know better. We could easily fix those things if humans were doing the work!

But AI can't. It will never be able to, because not doing those things would mean it couldn't do anything else either. As with self-driving cars, the solution to these issues will always be 2 years out

@eniko Some of the trouble here is that slinging code (edit: for a significant, but _non-comprehensive_, set of applications) is one of the areas where the current generation of LLMs actually is a productivity boost. Combine that with the classic “well I’m smart and your field can’t be that hard” attitude and you get xkcd.com/1831/ but s/algorithms/ai/


@arubis @eniko But is it though? Because the last bits of research I've seen on this don't really support that notion, to say the least, and it seems to be mostly just Microsoft (and developers with no responsibility for the end product...) claiming that it is.


CW: more detail, LLMs and productivity

@arubis @eniko Sorry, I should've probably been a bit less concise/flippant there. Specifically the problem I see time and time again is that the people claiming it "makes them more productive" are developers who hold no responsibility for how well the end product works, and therefore are not factoring the externalized costs (and the costs of fixing the problems caused by it) into their assessment.

And on the other side of the wall that their code is being thrown over, there's the team leads, senior devs, etc. who are expected to fix (and in some way 'pay for') the results of insecure, unreliable, unmaintainable etc. code, who seem to pretty much universally feel that LLMs are *harming* the overall productivity of the team. Meanwhile there seems to be no objectively measurable evidence of LLMs increasing productivity, just a lot of marketing puff pieces.

So as far as I can tell these claims of 'productivity' are purely marketing, aimed at those who can afford to ignore the externalized costs.
