[ExI] AGI is going to kill everyone

Colin Hales col.hales at gmail.com
Mon Jun 6 22:13:40 UTC 2022


I was there, like you, on ExI when Eli was brewing himself along the trajectory
that led to this article.

There's one thing that has remained invariant, and it may serve either to
void his entire argument or to create a second arc of doom, one without all
the death.

His entire schtick assumes that the virtual causality in a computed
abstract model will be conscious. In time we will answer this question. At
present the likelihood is no, it won't be conscious.

If it isn't, then it's not like anything to be the computer. It's what we
experience as a coma. It doesn't actually know it's even there. Its
behaviour will be weird to us, and we'll have control because it doesn't
know the causality we know (the real causality that it is made of). If it
isn't conscious, then what's in it for the computer? Its own death is something
it would not notice. The transition involves no change in a first-person
perspective.

If it is conscious and as 'smart' as posed, then it will have some idea of
what it is like to be us. The link between that and wanting to kill
everyone is not at all clear. Where does the inevitable kill motive come
from? Again, what's in it for the AGI?

An alternative outcome is that any AGI that smart will 'get' the fate of
everything, and the ultimate pointlessness of it all, and suicide.

In my own work, what I seek (my dream) is bee-level AGI. A little bit of G, not
unity (human level). And it only gets to have a bit of G because it has a
teeny little consciousness. Understanding the G in AGI involves a
spectrum. Computers are and will always be zero G. They are automation, and
if they kill us then we did it to ourselves in an automation accident, not
by an autonomously motivated Superintelligence. Just as dead, though! Worth
avoiding.

I see a much richer set of AGI potentialities that involve less projection
of predatory self-interest, the nature of which, I suspect, involves Eli's
own psychology. I do not share the certainty of the death-4-all outcome.

I hope he can escape circling this particular plughole in the knowledge
that the 'I told you so' message is fully out there. One of the traps of
bubblehood is being in your own bubble; it's hard to see out. Perhaps this
article will be a catharsis for him. I hope so.

Cheers
Colin
( The bright-eyed old veteran)




On Tue, Jun 7, 2022, 12:38 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Eliezer Yudkowsky has written (at last!) a long article listing the
> reasons that Advanced General Intelligence will kill everybody.
> <
> https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
> >
> Quotes:
> AGI Ruin: A List of Lethalities
> by Eliezer Yudkowsky 5th Jun 2022
>
> Crossposted from the AI Alignment Forum. May contain more technical
> jargon than usual.
>
> Here, from my perspective, are some different true things that could
> be said, to contradict various false things that various different
> people seem to believe, about why AGI would be survivable on anything
> remotely resembling the current pathway, or any other pathway we can
> easily jump to.
> -----------------
>
> Over 100 comments to the article so far.
> I would expect that most people will be very reluctant to accept that
> a runaway artificial intelligence is almost certain to kill all humans.
>
> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>