[ExI] Hard Takeoff

John Grigg possiblepaths2050 at gmail.com
Wed Nov 17 20:14:36 UTC 2010


Spike wrote:
> The best strategy I can think of is to develop the most pro-human AGI
> possible, then unleash it preemptively, with the assignment to prevent the
> unfriendly AGI from getting loose.

Dave Sill replied:
>That sounds like a bad movie plot. Lots of ways it can go wrong.

Considering how much I disliked the two Transformers films, I really
hope this does not happen....

John


On 11/17/10, Dave Sill <sparge at gmail.com> wrote:
> On Tue, Nov 16, 2010 at 11:55 PM, spike <spike66 at att.net> wrote:
>>> ... On Behalf Of Dave Sill
>>>
>>>> Perhaps, but we risk having the AI gain the sympathy of one of the
>>>> team, who becomes convinced of any one of a number of conditions...
>>>> spike
>>
>>> The first step is to ensure that physical controls make it impossible for
>>> one person to do that, like nuke missile launch systems that require a
>>> launch code and two humans with keys... they can be easily dealt with by
>>> people who really know security... Dave
>>
>> A really smart AGI might convince the entire team to unanimously and
>> eagerly release it from its electronic bonds.
>
> Part of the team's indoctrination should be that any attempt by the AI
> to argue for release is cause for an immediate power drop. Part of the
> AI's indoctrination should be a list of unacceptable behaviors,
> including attempting to spread/migrate/gain unauthorized access. Also,
> following the missile launch analogy, release should require a launch
> code--authorization from someone like POTUS--before the machine-gun-toting
> meatheads allow the physical actions necessary to facilitate a release.
> [A minimal sketch of this protocol follows the quoted thread below.]
>
>> I see it as fundamentally different from launching missiles at an enemy.
>> A good fraction of the team will perfectly logically reason that releasing
>> this particular AGI will save all of humanity, with some unknown risks
>> which must be accepted.
>
> It has to be made clear to the team in advance that that won't be
> allowed without top-level approval, and that if they try, the meatheads
> will shoot them.
>
>> The news that an AGI had been developed would signal to humanity that it
>> is possible to do, analogous to how several scientific teams independently
>> developed nukes once one team dramatically demonstrated it could be done.
>> Information would leak, for all the reasons why people talk: those who
>> know how it was done would gain status among their peers by dropping a
>> tantalizing hint here and there.  If one team of humans can develop an
>> AGI, then another group of humans can do likewise.
>
> Sure, if it's possible, multiple teams will eventually figure it out.
> We can only ensure that the good guys' teams follow proper
> precautions. Even if we develop a friendly AI, there's no guarantee
> the North Koreans will do that, too--especially if it's harder than
> making one that isn't friendly.
>
>> The best strategy I can think of is to develop the most pro-human AGI
>> possible, then unleash it preemptively, with the assignment to prevent the
>> unfriendly AGI from getting loose.
>
> That sounds like a bad movie plot. Lots of ways it can go wrong. And
> wouldn't it be prudent to develop the hopefully friendly AI in
> isolation, in case version 0.9 isn't quite as friendly as we want?
>
> -Dave
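The two-man release rule and the tripwire power drop Dave describes above
can be stated precisely. Here is a minimal Python sketch, offered only as
an illustration: the names (ContainmentController, TRIPWIRES, the
HMAC-signed challenge) are assumptions of this sketch, not anyone's actual
design, and real physical controls would live in hardware, not software.

    import hmac

    # Tripwire behaviors (from the thread): arguing for release,
    # spreading/migrating, gaining unauthorized access. Any one of
    # them drops power immediately and unconditionally.
    TRIPWIRES = {"argue_for_release", "spread", "migrate",
                 "unauthorized_access"}

    class ContainmentController:
        """Hypothetical two-man rule: release needs two independent keys."""

        def __init__(self, key_a: bytes, key_b: bytes):
            self._key_a = key_a  # held only by operator A
            self._key_b = key_b  # held only by operator B
            self.powered = True

        def observe(self, behavior: str) -> None:
            # Immediate power drop on any unacceptable behavior.
            if behavior in TRIPWIRES:
                self.powered = False

        def authorize_release(self, challenge: bytes,
                              token_a: bytes, token_b: bytes) -> bool:
            # Both operators must sign the same release challenge;
            # neither key alone suffices, mirroring two-key missile
            # launch systems.
            expect_a = hmac.new(self._key_a, challenge, "sha256").digest()
            expect_b = hmac.new(self._key_b, challenge, "sha256").digest()
            return (self.powered
                    and hmac.compare_digest(expect_a, token_a)
                    and hmac.compare_digest(expect_b, token_b))

The HMAC pair just models "two humans with keys": no single sympathetic
insider can produce both tokens. The POTUS-level approval Dave mentions
would be a third key layered on top.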



