Serious question: why would we want to create artificial intelligence?
And I mean the actual meaning of the term, not "a bunch of algorithms in a trenchcoat" and *definitely* not LLM grifts, but something that you could plausibly consider a form of genuine sentient life.
Why is this even a goal worth chasing? What does anyone hope to actually achieve with this?
@virtulis But we already *have* less squishy and less biased systems; computers can do that just fine, given careful application.
So why do we need "AI"?
Especially considering that trying to recreate sentience is more likely to *reintroduce* bias, because biases, however frustrating, do actually play a role in survival and development.
@joepie91 tangent: I think it's sometimes interesting to think in terms of probabilities while disregarding time.
Like, objectively and realistically speaking, artificial sentience, faster-than-light travel/comms, cryosleep/cloning/etc. all seem equally fantastic and unlikely to ever happen (let alone in our lifetime).
But also objectively, one of these is not like the others, because sentience exists and is not magic, so the question is only one of making more of the same. So idk, I guess it's on the very fringe of something worth talking about sometimes?
But definitely not with those people.
@joepie91 well, exactly that. We could attempt engineering a "good" bias. I don't think we even need "laws of robotics", just basic empathy, which is way easier than sentience (see dogs).
By "less squishy" I imagine:
Not requiring oxygen or human energy sources, or being otherwise more environment-resilient. On Earth this is easily solvable with remote control, but as latency grows, some*one* better suited for it might be worth considering.
Serializable and/or modifiable. Again, I think engineering a "re-printable" sentience from scratch is probably easier than the magical instant human backup and cloning trope. So some*one* who can literally "unsee" things without years of therapy. Quite abuse-prone, though.
Yeah I think that's about all I got.