[ExI] AGI is going to kill everyone

Stuart LaForge avant at sollegro.com
Thu Jun 9 04:07:40 UTC 2022


Quoting BillK:

>
> Eliezer Yudkowsky has written (at last!) a long article listing the
> reasons that Advanced General Intelligence will kill everybody.
> <https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities>
> Quotes:
> AGI Ruin: A List of Lethalities
> by Eliezer Yudkowsky 5th Jun 2022
>
> Crossposted from the AI Alignment Forum. May contain more technical
> jargon than usual.
>
> Here, from my perspective, are some different true things that could
> be said, to contradict various false things that various different
> people seem to believe, about why AGI would be survivable on anything
> remotely resembling the current pathway, or any other pathway we can
> easily jump to.
> -----------------

To Eliezer's credit, there is a non-zero probability that at least a  
few of his worries come to pass. But in my estimation, it is nowhere  
near a certainty and is less likely than not. For one thing, AGI is  
poorly defined, and Eliezer grants AGI human traits that differ from  
intelligence as if raw intelligence could substitute for them. A high  
IQ does not automatically grant true knowledge, nor does it grant  
silver-tongued persuasiveness, nor insight into the human psyche.

No matter how intelligent an AI is, it can only wield whatever powers  
over us that we choose to give it. If you don't want AGI to have the  
power to kill, then don't give it access to weapons. The notion that,
based on its training set, it would be able to learn the Jedi
mind-trick and convince some human overseer to give it power over them
strikes me as unlikely.

I have a problem with his assumption that the orthogonality argument
applies to an AGI. An AGI is supposed to have general intelligence.
Single-minded obsession with maximizing paperclips is evidence of
narrow intelligence maximizing a very small set of parameters. A true
AGI would have to optimize over a very large set of parameters, some
of which are contradictory and introduce trade-offs into the
calculations, just like natural intelligence. An AGI could not be a
paperclip maximizer and still be a rational AGI.
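
To make the contrast concrete, here is a toy sketch in Python. It is
entirely my own illustration with invented numbers, nothing from
Eliezer's post: an agent scoring a single parameter runs away without
limit, while an agent weighing several conflicting parameters settles
on an interior trade-off.

    # Narrow intelligence: utility is a single parameter, so "more
    # paperclips" is always the rational move, without limit.
    def narrow_utility(paperclips):
        return paperclips

    # General intelligence: many parameters, some in conflict. Making
    # paperclips overdraws an energy reserve and costs human goodwill,
    # so the optimum is an interior trade-off. All coefficients are
    # made up for illustration.
    def general_utility(paperclips, energy_reserve, human_goodwill):
        return (paperclips
                - 1.0 * max(0, paperclips - energy_reserve)  # energy overdraft
                + 0.5 * human_goodwill)                      # cooperation pays

    print(max(range(101), key=narrow_utility))  # 100: runaway maximization

    print(max(range(101),
              key=lambda p: general_utility(p, energy_reserve=40,
                                            human_goodwill=100 - p)))  # 40

The single-parameter agent never has a reason to stop; the
multi-parameter agent stops exactly where its trade-offs balance,
which is the behavior I would expect of anything deserving the label
"general".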

Another thing that Eliezer does not consider is the influence that the
blockchain and cryptocurrency will have on the AI-human ecosystem.
After all, Nanosanta would be able to make as many $100 bills as it
wants, but not bitcoins: new bitcoin can only be minted by
proof-of-work that the whole network verifies.
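
To see why, here is a minimal proof-of-work loop in Python (the
difficulty is a toy value I chose; the real network's is
astronomically higher). Every bitcoin block must carry a hash below a
network-set target, which can only be found by brute-force search that
everyone else verifies cheaply.

    import hashlib

    DIFFICULTY_BITS = 18  # toy difficulty; Bitcoin's is vastly higher

    def mine(block_data: bytes) -> int:
        """Brute-force a nonce whose SHA-256 hash falls below the target."""
        target = 2 ** (256 - DIFFICULTY_BITS)
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # found after ~2**18 hashes on average
            nonce += 1

    # Forging coins means redoing this search for every faked block,
    # faster than the honest network extends the real chain. Unlike a
    # $100 bill, there is no shortcut for a clever counterfeiter.
    print("valid nonce:", mine(b"toy block"))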


> Over 100 comments to the article so far.
> I would expect that most people will be very reluctant to accept that
> a runaway artificial intelligence is almost certain to kill all humans.

A runaway AI could do a lot of damage, but it would need the assistance
of numerous humans to do so. If it were truly intelligent, then it
would try to find a way to benefit from keeping humans alive, just
as humans try to find a way to benefit from keeping dolphins alive.
Furthermore, a runaway AI that set itself against all of
Internet-enabled humanity would be challenging a meta-organism that
rivalled it in sheer intelligence. An AI against all of 4chan alone
would be an epic battle.

Stuart LaForge




