[ExI] Universally versus 'locally' Friendly AI

Samantha Atkins sjatkins at mac.com
Mon Mar 14 00:15:08 UTC 2011


On Mar 13, 2011, at 12:51 PM, Ben Zaiboc wrote:

> Kelly Anderson <kellycoinguy at gmail.com> declared:
> 
>> If we screw up on the first generation of AGI, then
>> humanity is toast, IMHO.
> 
> If we screw up, and if we don't screw up, Humanity, as it is now (circa 2011), will be toast.

When I hear this argument about getting the first AGI right, I am increasingly unimpressed.

First, only a very limited amount of guaranteed not-screwing-up is remotely possible for beings with our limitations when working on something so far out on the edge of our abilities.  The notion that we can not only build an AGI but build one that provably will always do the "best thing" (which we cannot even define well) regarding us, no matter how it improves, grows, and expands upon its original state and goal set, is simply incredible to me.  We cannot even guarantee the behavior, across all conditions, of perfectly mundane, relatively simple computational systems.

Second, it is not at all clear to me that there is any chance of continued human survival under accelerating change without inventing AGI within the next twenty to thirty years at most.  The reason for this opinion is the limits of human cognitive capacity and of our inter-working with one another and with non-AGI computational systems.  The capacity of this combined system, especially in terms of timely, good decision making and implementation, is not at all boundless.  At some point the requirements of what is needed will certainly exceed our pre-AGI capacity.  Many days I am nearly convinced that our capacity is already grossly inadequate to current challenges.  If this is so, then AGI is our only real hope.  Enhancement of ourselves and our systems will help, but not as quickly in potential as skipping to a new substrate.

> 
> Humanity as it is now is a path on top of a hill that is getting narrower and narrower, with the fall-off on each side getting steeper and steeper.  How long we can keep walking along the path without falling off is unknown, and which side we will fall down is also unknown.  The only thing that is certain is that the path will get so narrow that nobody can stay on it, and it will end sooner or later.  Some of us (transhumanists) are rooting for one side, some (luddites, bioconservatives) for the other.  Most people just shut their eyes and keep walking, in the hope that the path will continue forever.  It won't.

Yep.  We are at the species-level challenge point where our evolved characteristics and limits are increasingly inadequate to the environment we find ourselves in.

- samantha