I hate how all AI hype is predicated on "if we can just make this not be broken then it would be an amazing product"
And because AI-produced things look kinda close to the real deal, people buy it. 'Cause it feels like it just needs a small improvement, even though its flaws are a fundamental part of the technology
Just don't draw the weird 6th finger. Just don't make things up when you don't have a real answer. Just don't entirely change the environment in an AI-generated game when the player turns around 180 degrees
These things *feel* like they're small, solvable problems to people who don't know better. We could easily fix those things if humans were doing the work!
But AI can't. It never will be able to, because not doing those things would mean it couldn't do anything else either; the flaws come out of the same process as everything it gets right. Like self-driving cars, the solution to these issues will always be two years out
More detail on LLMs and productivity:
@arubis @eniko Sorry, I probably should've been a bit less concise/flippant there. Specifically, the problem I see time and time again is that the people claiming it "makes them more productive" are developers who hold no responsibility for how well the end product works, and who therefore aren't factoring the externalized costs (and the costs of fixing the problems it causes) into their assessment.
And on the other side of the wall their code is being thrown over, there are the team leads, senior devs, etc. who are expected to fix (and in some sense 'pay for') that insecure, unreliable, unmaintainable code, and who seem pretty much universally to feel that LLMs are *harming* the team's overall productivity. Meanwhile there seems to be no objectively measurable evidence of LLMs increasing productivity, just a lot of marketing puff pieces.
So as far as I can tell, these claims of 'productivity' are pure marketing, aimed at those who can afford to ignore the externalized costs.