@landelare @eniko The problem with that type of reasoning is that LLMs have not fundamentally changed anything about this equation; they are not a 'step in the right direction' or whatever.
A sudden breakthrough has *always* been possible, for as long as there has been technology. One doesn't become more or less likely simply because other technologies come to exist; those technologies only shift the odds if they can be built upon.
And that's the problem with LLMs: they're a dead-end technology. There is no path towards improvement, because they have always been entirely smoke and mirrors; there was never any legitimate technical advancement (towards anything people actually care about) underlying them.
In other words: a sudden breakthrough is exactly as likely today as it was 15 years ago, because LLMs never actually advanced the state of the art; they just created an effective *illusion* of doing so. But you can't build on illusions.