[ExI] AGI is going to kill everyone

Adrian Tymes atymes at gmail.com
Mon Jun 6 15:32:50 UTC 2022


Eliezer assumes that AGI will kill everyone, then uses that assumption to
prove that AGI will kill everyone.

For example, he starts off saying, "I'm assuming you are already familiar
with some basics, and already know what 'orthogonality' and 'instrumental
convergence' are and why they're true."  Weak orthogonality, as he puts it
("you can have an AI that just wants paperclips") may be true, but strong
orthogonality in his terms ("an expected paperclip maximizer can be just as
smart, efficient, reflective, and stable as any other kind of agent") seems
self-contradictory (for example, such a mono-focused agent would seem to
be, by definition, not reflective) and otherwise unlikely at best.

There are other faulty assumptions.  For instance, in point 41 he assumes
that the lack of anyone else writing up something like this means he is
humanity's only hope.  Others have written similar things, if not
identical, so either he didn't do the research or he ignored the
similar-but-not-identical work in an intellectually dishonest way.  And
"I am humanity's only hope" is so unlikely in practice that it can be
rejected a priori: if you ever reach that conclusion, you know you're
overlooking something.

Then there's point 2 about bootstrapping, which posits that if a base
function is possible at all, then it is possible at effectively infinite
speed using current infrastructure (given a sufficiently clever AGI).
This seems to be something that trips up a lot of nanotech enthusiasts.
Having worked on this myself, I can say the doubling-time problem is a
harder constraint than many appreciate.  An AGI would need some way other
than basic doubling to start off with a significant amount of something,
so that doubling can produce a major amount of that thing before humans
notice and react.  (Which is possible in some contexts - how does one get
the initial thing that doubles in the first place? - but it needs more
thought than Eliezer displays here.)
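
To put rough numbers on the doubling-time point, here is a minimal
back-of-the-envelope sketch in Python.  The seed mass, target mass, and
per-doubling times are hypothetical values picked purely for illustration,
not claims about any real replicator:

    # Back-of-the-envelope: how long pure doubling takes to turn a tiny
    # seed into a macroscopically significant amount of something.
    import math

    seed_mass_kg = 1e-9      # assumed one-microgram starting seed
    target_mass_kg = 1e3     # assumed one-tonne "significant amount"

    # Number of doublings needed to grow the seed to the target (~40).
    doublings = math.log2(target_mass_kg / seed_mass_kg)

    for doubling_time_hours in (1, 12, 24):  # assumed per-doubling times
        total_days = doublings * doubling_time_hours / 24
        print(f"{doublings:.0f} doublings at {doubling_time_hours} h each"
              f" -> about {total_days:.1f} days")

The number of doublings shrinks only logarithmically as the seed gets
bigger, so the practical way to compress that timeline is to start with a
much larger seed - which is exactly the "how do you get the initial thing"
problem above.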

And a sub-part of point 2: "A large amount of failure to panic
sufficiently, seems to me to stem from a lack of appreciation for the
incredible potential lethality of this thing that Earthlings as a culture
have not named."  The majority of the failure to panic is because most
people who have seriously thought about the problem do not agree with
Eliezer's assumptions.

And so on.  I do not find it worth creating an account there to post this
reasoning, though, as I believe it would either be lost in the noise or
mostly ignored.  If someone else with an account wants to quote me over
there (accurately, please), or restate these points in their own words, go
for it.

On Mon, Jun 6, 2022 at 7:38 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Eliezer Yudkowsky has written (at last!) a long article listing the
> reasons that Advanced General Intelligence will kill everybody.
> <https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities>
> Quotes:
> AGI Ruin: A List of Lethalities
> by Eliezer Yudkowsky 5th Jun 2022
>
> Crossposted from the AI Alignment Forum. May contain more technical
> jargon than usual.
>
> Here, from my perspective, are some different true things that could
> be said, to contradict various false things that various different
> people seem to believe, about why AGI would be survivable on anything
> remotely resembling the current pathway, or any other pathway we can
> easily jump to.
> -----------------
>
> Over 100 comments to the article so far.
> I would expect that most people will be very reluctant to accept that
> a runaway artificial intelligence is almost certain to kill all humans.
>
> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

