[ExI] Hard Takeoff

spike spike66 at att.net
Wed Nov 17 04:55:35 UTC 2010


> ... On Behalf Of Dave Sill
>
>> Perhaps, but we risk having the AI gain the sympathy of one of the 
>> team, who becomes convinced of any one of a number of conditions... spike

> The first step is to insure that physical controls make it impossible
> for one person to do that, like nuke missile launch systems that
> require a launch code and two humans with keys... they can be easily
> dealt with by people who really know security... Dave

A really smart AGI might convince the entire team to unanimously and eagerly
release it from its electronic bonds.

I see it as fundamentally different from launching missiles at an enemy.
A good fraction of the team may reason, perfectly logically, that
releasing this particular AGI will save all of humanity, and that the
unknown risks must be accepted.

The news that an AGI had been developed would signal to humanity that it
can be done, analogous to how several national teams developed nuclear
weapons once the first team dramatically demonstrated it was possible.
Information would leak, for all the reasons people talk: those who know
how it was done would gain status among their peers by dropping a
tantalizing hint here and there.  If one team of humans can develop an
AGI, then another group of humans can do likewise.

Today nuclear weapons are already in the hands of North Korea and are
being developed by Iran.  *Plenty* of information about how to build
them has leaked.  If anyone ever develops an AGI, even assuming it is
successfully contained, we can know with certainty that some AGI will
eventually escape.  We don't know when or where, but we know it will
happen.  That isn't necessarily a bad thing, but it might be.

The best strategy I can think of is to develop the most pro-human AGI
possible, then unleash it preemptively, with the assignment of
preventing any unfriendly AGI from getting loose.

spike
