[ExI] AGI is going to kill everyone

BillK pharos at gmail.com
Mon Jun 6 22:36:32 UTC 2022


On Mon, 6 Jun 2022 at 23:16, Colin Hales via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> I was there, like you, in EXI when Eli was brewing himself on the trajectory leading to this article.
>
> There's one thing that has remained invariant, and that may serve to either void his entire argument or create a second arc of doom, including one without all the death.
>
> His entire schtick assumes that the virtual causality in a computed abstract model will be conscious. In time we will answer this question. At present the likelihood is no, it won't be conscious.
>
> If it isn't, then it's not like anything to be the computer. It's what we experience as a coma. It doesn't actually know it's even there. Its behaviour will be weird to us and we'll have control, because it doesn't know the causality we know (the real causality that it is made of). If it isn't, then what's in it for the computer? Its own death is something it would not notice. The transition involves no change in a 1st person perspective.
>
> If it is conscious and as 'smart' as posed, then it will have some idea of what it is like to be us. The link between that and wanting to kill everyone is not at all clear. Where does the inevitable kill motive come from? Again, what's in it for the AGI?
>
> An alternative outcome is that any AGI that smart will 'get' the fate of everything, and the ultimate pointlessness of it all, and suicide.
>
> In my own work, what I seek (my dream) is bee-level AGI. A little bit of G, not unity (human level). And it only gets to have a bit of G because it has a teeny little consciousness. Understanding the G in AGI involves a spectrum. Computers are and will always be zero G. They are automation, and if it kills us then we did it to ourselves by an automation accident, not by an autonomously motivated Superintelligence. Just as dead though! Worth avoiding.
>
> I see a much richer set of AGI potentialities that involve less projection of predatory self-interest, the nature of which, I suspect, involves Eli's own psychology. I do not share the certainty of the death-4-all outcome.
>
> I hope he can escape circling this particular plughole in the knowledge that the 'I told you so' message is fully out there. One of the traps of bubblehood is being in your own bubble. Hard to see out. Perhaps this article will be a catharsis for him. I hope so.
>
> Cheers
> Colin
> (The bright-eyed old veteran)


I don't think Eli is concerned about what he calls 'weak' AGI systems
such as you describe.  His worry is about all the people (like
Facebook AI) who say 'That's good, but we can build a more powerful
AGI system than that!' and charge ahead regardless.
People won't be satisfied if they think they see a way to gain more power.


BillK


