like when people say "don't dump out ML with the AI bathwater"

sorry, but I am

neural networks are an overhyped misunderstanding of function interpolation, developed back when we were first learning how brains worked, and still in use even now that we know brains don't work that way

even if ML can do the job well, there's a good chance that training networks is substantially less efficient than computing the same result with other methods, and those other methods can potentially produce better forms of the end result that a traditional neural network can't compute at all

like, here's a good example: meta still has the ability to recognise faces with 99% accuracy, way better than any human can. why? because facial recognition tech was developed long before we had the computing resources to waste on the bullshit we have today, where we just funnel all the pixels into a neural network and hope for the best. instead, it actually uses the structure of human faces to create uniquely recognisable "faceprints", and those are what they compare against their database

even if the recognition of said "faceprints" is done via a neural network, it's so many orders of magnitude less data that we just don't care.
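to make the "faceprint" idea concrete, here's a minimal toy sketch. this is not how meta's system actually works; the landmarks, coordinates, and helper names are all invented for illustration. the point is just that a handful of geometric ratios between facial landmarks is a tiny, scale-invariant feature vector compared to a full grid of pixels:

```python
# Toy "faceprint": reduce a face to ratios of pairwise distances between
# a few landmark points, instead of feeding every pixel to a network.
# All landmark coordinates below are made up for illustration.
import math

def faceprint(landmarks):
    """Build a scale-invariant feature vector from (x, y) landmark points.

    The distance between the first two landmarks (e.g. the eyes) is used
    as a normalising baseline, so the print is unchanged by image scale.
    """
    base = math.dist(landmarks[0], landmarks[1])
    ratios = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            ratios.append(math.dist(landmarks[i], landmarks[j]) / base)
    return ratios

def difference(a, b):
    """Euclidean distance between two faceprints: lower = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical landmarks: left eye, right eye, nose tip, mouth centre.
alice = faceprint([(30, 40), (70, 40), (50, 60), (50, 85)])
alice_scaled = faceprint([(60, 80), (140, 80), (100, 120), (100, 170)])
bob = faceprint([(28, 42), (72, 42), (50, 70), (50, 80)])
```

here `alice_scaled` is the same face at twice the size, and its faceprint comes out identical to `alice`'s, while `bob`'s doesn't match. a print like this is a handful of numbers per face, versus millions of pixel values, which is the "orders of magnitude less data" part.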

turns out that actually solving problems is better than not solving them!


@clarfonthey I continually think about a research project I saw that used an LLM to give directions for car navigation, which had to introduce a theorem prover to actually verify the LLM output was valid.

Nothing of value added, required importing a large tool to make up for the obvious pitfalls. Only appeals to people who value the technology for name recognition alone.

@thufie we've packaged a lie detector with our lying machine
