<div dir="ltr">Eliezer assumes that AGI will kill everyone, and uses that assumption to prove that AGI will kill everyone.<div><br></div><div>For example, he starts off saying, "I'm assuming you are already familiar with some basics, and already know what 'orthogonality' and 'instrumental convergence' are and why they're true." Weak orthogonality, as he puts it ("you can have an AI that just wants paperclips") may be true, but strong orthogonality in his terms ("an expected paperclip maximizer can be just as smart, efficient, reflective, and stable as any other kind of agent") seems self-contradictory (for example, such a mono-focused agent would seem to be, by definition, not reflective) and otherwise unlikely at best.</div><div><br></div><div>There are other faulty assumptions, such as point 41 where he assumes that the lack of anyone else writing up something like this (others have written similar things, if not identical - so either he didn't do the research or he ignored similar-but-not-identical in an intellectually dishonest way) means he is humanity's only hope (which is so highly unlikely in practice that it can be rejected a priori, such that if you ever reach this conclusion then you know you're overlooking something).</div><div><br></div><div>Then there's point 2 about bootstrapping, which posits that a base function is possible at all and therefore it is possible at infinite speeds using current infrastructure (given a sufficiently clever AGI). This seems to be something that trips up a lot of nanotech enthusiasts. Having worked on this myself, the doubling time problem is a harder constraint than many appreciate. An AGI would need some way other than basic doubling to start off with a significant amount of something, such that doubling can make a major amount of that thing before humans notice and react. (Which is possible for some contexts - how does one get the initial thing that doubles in the first place - but needs more thought than Eliezer displays here.)</div><div><br></div><div>And a sub-part of point 2: "A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named." The majority of the failure to panic is because most people who have seriously thought about the problem do not agree with Eliezer's assumptions.</div><div><br></div><div>And so on. I do not find it worth creating an account there to post this reasoning, though, as I believe it would be either lost within the noise or mostly ignored there. If someone else with an account wants to quote me over there (accurately, please), or restate these points in their own words, go for it.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 6, 2022 at 7:38 AM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Eliezer Yudkowsky has written (at last!) a long article listing the<br>
On Mon, Jun 6, 2022 at 7:38 AM BillK via extropy-chat <extropy-chat@lists.extropy.org> wrote:

> Eliezer Yudkowsky has written (at last!) a long article listing the
> reasons that Artificial General Intelligence will kill everybody.
> <https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities>
> Quotes:
> AGI Ruin: A List of Lethalities
> by Eliezer Yudkowsky, 5th Jun 2022
>
> Crossposted from the AI Alignment Forum. May contain more technical
> jargon than usual.
>
> Here, from my perspective, are some different true things that could
> be said, to contradict various false things that various different
> people seem to believe, about why AGI would be survivable on anything
> remotely resembling the current pathway, or any other pathway we can
> easily jump to.
> -----------------
>
> Over 100 comments on the article so far.
> I would expect that most people will be very reluctant to accept that
> a runaway artificial intelligence is almost certain to kill all humans.
>
> BillK