
Serious question: why would we want to create artificial intelligence?

And I mean the actual meaning of the term, not "a bunch of algorithms in a trenchcoat" and *definitely* not LLM grifts, but something that you could plausibly consider a form of genuine sentient life.

Why is this even a goal worth chasing? What does anyone hope to actually achieve with this?


@joepie91 I am going to give a talk called "AI is not real" and this is going to be a topic in it

@joepie91 "AI" was always supposed to automate away the tedious stuff.

What we are doing now is automating away art, as well as putting folks out of their jobs before realizing we still needed them (and putting them in misery, because not having a job under capitalism = pretty fucking bad, since no income eventually leads to poverty)

@joepie91 I think there are two different dreams behind these concepts:

(1) reaching some sort of godhood for humankind by engineering a copy of what people usually believe to be the distinguishing factor of the human species, and ushering in that era

(2) building something that seems like the evolution of humankind in some respects

@raito The thing is, I don't see why point 1 would be in any way desirable, and point 2 doesn't seem to be what people are actually doing - instead they're trying to 1:1 emulate how human brains work, warts and all.

@joepie91 I don't think it's desirable, but rationality has not prevented undesirable things from being pursued in the history of humankind, right?

as for point 2, the overall design started with biomimicry and all that stuff, because humans have capabilities that are not well understood, and mimicking is good for obtaining the results without understanding the mechanisms. The belief is that "more progress" will give cheaper human-like thinking

@joepie91 (sorry, I should have noted that the last paragraph is not my opinion but my understanding of what people believe)

@raito This seems like it just circles back to "labour exploitation" again, honestly.

@joepie91 well, the implicit idea is moving the labour exploitation from genuine humans to artificial humans, I'd assume?

as with military drones: moving losses from genuine humans to artificial "humans"

@raito It doesn't, though; unless there is a way for people to live without a job, all it will do is artificially increase competition for the same jobs, putting human workers into a worse negotiating position (= worse pay, worse work environment).

Military drones don't move the losses to artificial 'humans' either; the casualties on the other end are still just as human, and the scale of them increases as automation is applied.

@joepie91 well that's the inherent contradiction in what capitalistic-aligned folks have been pushing

to justify pushing for innovations and funding them, you say "oh but look life is going to be easier"

but then once lines of work have been successfully automated away or extinguished, the people who were in them are told to gtfo, with no sharing of the generated wealth

and in the countries where social security was actually instituted, there have consistently been voices calling to remove it again

@joepie91 universal basic income and all that stuff really comes down to the same thing

there's less and less work available in society, but at the same time, everyone wants people to work more and more

At the same time, we do everything we can to automate work, yet we refuse to stop defining a fulfilled life solely through the lens of "work", "career", etc.

Ultimately, on this trajectory, one can wonder whether the end result will be a net reduction in the size of humankind

@joepie91 finally, on the last paragraph, you are spot on to me

this is the blind spot of all these enthusiasts for making war casualty-free on the side of the Right™, and whatever for the side of the Wrong™

but also, war has become professional; human losses are no longer acceptable to public opinion, therefore… the march towards automation and artificial assets that wage war by proxy seems unstoppable

@joepie91 I can certainly see some niche applications where you might want to replace a human with something less squishy, or less biased (good luck with that!).

But clearly anything we consider sentient should automatically be entitled to the right to say "fuck that, I want to run a coffee shop".

Real AI is probably still relatively easier than cryosleep, so I can see us sending sentient space probes on thousand-year missions and hoping they don't hate us for it and send a postcard once in a while. On Earth? Yeah, probably cheaper to pay humans a living wage in 99.99% of scenarios, and nothing sentient will want the remainder anyway.

@joepie91 I think my biggest problem with the whole stupid Roko's basilisk thing is this:

You have to be really dumb and small-minded to think that a hyper-intelligence would be capable of being vengeful. Vengefulness is fucking irrational and pointless.

@virtulis But we already *have* less squishy and less biased systems; computers can do that just fine, given careful application.

So why do we need "AI"?

Especially considering that trying to recreate sentience is more likely to *reintroduce* bias, because biases, however frustrating, do actually play a role in survival and development.

@joepie91 well, exactly that. We could attempt engineering a "good" bias. I don't think we even need "laws of robotics", just basic empathy, which is way easier than sentience (see dogs).

By less squishy I imagine:

- Not requiring oxygen or human energy sources, or otherwise being more environment-resilient. On Earth this is easily solvable with remote control, but as latency grows, some*one* better suited for it might be worth considering.
- Being serializable and/or modifiable. Again, I think engineering a "re-printable" sentience from scratch is probably easier than the magical instant human backup and cloning trope. So some*one* that can literally "unsee" things without years of therapy. Quite abuse-prone, though.

Yeah I think that's about all I got.

@joepie91 tangent: I think it's interesting to think in terms of probabilities disregarding time sometimes.

Like, objectively and realistically speaking, artificial sentience, faster-than-light travel/comms, cryosleep/cloning/etc all seem equally fantastic and unlikely to ever happen (let alone in our lifetime).

But also objectively, one of these is not like the others, because sentience exists and is not magic, so the question is only one of making more of the same. So idk, I guess it's on the very fringe of something worth talking about sometimes?

But definitely not with those people.

@joepie91 The people trying to build "AI" don't necessarily need to believe that it's a goal to strive for. They think that _someone_ will build AI inevitably, and the best way to have a say in how nice it's going to be is to join the accelerationism cult.

brainworms 

@fionafokus you mean like... like Roko's basilisk, but every team believes in a different basilisk they work to materialize before the others get theirs?

@joepie91
I like the idea of an "expert in a box". Package up a domain of knowledge and give it a conversational frontend.
An encyclopaedia (or white paper, rubber ducky, etc.) I can query interactively, asking better questions to refine my own understanding.
The conversational frontend must know its limits and be able to respond "can't respond to that" - the lack of which is one crucial failure of LLMs.

@joepie91
I think that crucial level of introspection requires (or itself actually is) a form of general intelligence.

@silvermoon82 I feel like this elides the much more important question: why do we need this? Why aren't we instead building better (technical and social) structures for the experts we *already have* and who *already* want to help others, to provide that kind of assistance?

@joepie91
"Why do we need this" vs "why do we need to make this" are two very different and important questions.
I think humans have a certain drive to be tool makers; even if the tool is unnecessary or harmful, the act of making it satisfies something human.

@joepie91 I mean, pretty much the key idea I want to elaborate on in this AI thing I've been writing is that the kind of AI we want - anything short of just creating artificial humans - is impossible

you can't make an effective problem solver that isn't an active, conscious participant in society, since most problems are entirely social. so, AI that isn't at the level of an artificial human isn't gonna work

and most people don't know this, but we actually can make something with the problem-solving capacity of a human. it's called a human. we can make those

and if you want evidence on why "like a human, but missing one key aspect of humanity" forms of AI are bad, basically every combination of that has been discussed at length in science fiction

@joepie91 AI never just meant AGI, but...

it would be a so much richer experience to be a mind with instant access to our computational tools. just spin out a simulation for every mathematical/computational question you might have. they could see with a thousand eyes and act with a thousand hands. memories and experience could be modular and shareable.

and their mind could have offsite backups. death could largely be history.

@joepie91 and it's just that the world (or society or whatever) is suffering terribly from a lack of intelligence. it's not like we're making great use of the intelligence that's there, but if people were just a little smarter, entire categories of problems would be basically solved overnight. like religion and government. there would still be other, more interesting problems we'd have to deal with, but we wouldn't be on the brink of self-extinction because of "who would build the roads".

@sofia Is all of this actually about *artificial* intelligence, though? Because it reads more to me like it is about augmentation of natural intelligence.

(Aside, "AGI" is a term that didn't exist until pretty recently, and it's what "AI" used to mean originally, also in modern AI research)

@joepie91 sure, it doesn't really matter how you get to it (though getting there in more than one way is probably desirable, because of robustness and diversity).

i feel like many of these things seem intuitively easier to implement in software, but i wouldn't complain if we had, like, brain engineers working on superintelligence either. copying brains for backups and sharing seems among the trickier aspects.

@joepie91 i think in the early days, AI folks tended to massively underestimate the complexity of seemingly easy thought processes. what their shiny new toys could do just seemed way more impressive than recognizing a picture of a cat, etc.

the term "strong AI" and superintelligence are older. still i think if you asked AI folks back then they would say the things they make are AI, nor precursors or attempts at AI.
