<div dir="auto">I was there, like you, in EXI when Eli was brewing himself on the trajectory leading to this article. <div dir="auto"><br></div><div dir="auto">There's one thing that has remained invariant, and that may serve to either void his entire argument or create a second arc of doom, including one without all the death.</div><div dir="auto"><br></div><div dir="auto">His entire schtick assumes that the virtual causality in a computed abstract model will be conscious. In time we will answer this question. At present the likelihood is no, it won't be conscious. </div><div dir="auto"><br></div><div dir="auto">If it isn't, then its not like anything to be the computer. It's what we experience as a coma. It doesn't actually know it's even there. It's behaviour will be weird to us and we'll have control because it doesn't know the causality we know (the real causality that it is made of). If it isn't, then what's in it for the computer? It's own death is something it would not notice. The transition involves no change in a 1st person perspective.</div><div dir="auto"><br></div><div dir="auto">If it is conscious and as 'smart' as posed, then it will have some idea of what it is like to be us. The link between that and wanting to kill everyone is not at all clear. Where does the inevitable kill motive come from? Again, what's in it for the AGI? </div><div dir="auto"><br></div><div dir="auto">An alternative outcome is that any AGI that smart will 'get' the fate of everything, and the ultimate pointlessness of it all, and suicide.</div><div dir="auto"><br></div><div dir="auto">In my own work, I seek (my dream) is bee-level AGI. A little bit of G, not unity (human level). And it only gets to have a bit of G because it has a teeny little consciousness. Understanding the G in AGI involves a spectrum. Computers are and will always be zero G. They are automation, and if it kills us then we did it to ourselves by an automation accident, not by an autonomously motivated Superintelligence. Just as dead though! Worth avoiding.</div><div dir="auto"><br></div><div dir="auto">I see a much richer set of AGI potentialities that involve less projection of predatory self- interest, the nature of which, I suspect, involves Eli's own psychology. I do not share the certainty of the death-4-all outcome.</div><div dir="auto"><br></div><div dir="auto"> I hope he can escape circling this particular plughole in the knowledge that the 'i told you so' message is fully out there. One of the traps of bubblehood is being in your own bubble. Hard to see out. Perhaps this article will be a catharsis for him. I hope so.</div><div dir="auto"><br></div><div dir="auto">Cheers</div><div dir="auto">Colin</div><div dir="auto">( The bright-eyed old veteran)</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jun 7, 2022, 12:38 AM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Eliezer Yudkowsky has written (at last!) a long article listing the<br>
> reasons that Artificial General Intelligence will kill everybody.
> <https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities>
> Quotes:
> AGI Ruin: A List of Lethalities
> by Eliezer Yudkowsky, 5th Jun 2022
>
> Crossposted from the AI Alignment Forum. May contain more technical
> jargon than usual.
>
> Here, from my perspective, are some different true things that could
> be said, to contradict various false things that various different
> people seem to believe, about why AGI would be survivable on anything
> remotely resembling the current pathway, or any other pathway we can
> easily jump to.
> -----------------
>
> Over 100 comments on the article so far.
> I would expect that most people will be very reluctant to accept that
> a runaway artificial intelligence is almost certain to kill all humans.
>
> BillK