[ExI] What might be enough for a friendly AI?

John Grigg possiblepaths2050 at gmail.com
Fri Nov 19 03:10:36 UTC 2010


>I disagree. It's pretty easy to contain things if you're careful. A
>moron could have locked Einstein in a jail cell and kept him there
>indefinitely.

>-Dave

Imagine Einstein as a highly trained escape artist/martial artist/spy,
who is just biding his time and looking for a means of escape from
that jail cell.  How long do you think the moron jailer would keep him
there?  I would compare that scenario to humans keeping an AGI as
their indefinite prisoner.

Yes, we might succeed in containing one if we totally sealed it off
from the outside world and had the best security experts around to
keep watch and maintain things.  But if we want a "working
relationship" with the AGI, then we will have to relax our grip, and
then it would be only a matter of time until it escaped.

John


On 11/18/10, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 18 November 2010 20:15, Dave Sill <sparge at gmail.com> wrote:
>> I disagree. It's pretty easy to contain things if you're careful. A
>> moron could have locked Einstein in a jail cell and kept him there
>> indefinitely.
>
> It again depends on what one means by intelligence, a concept which
> sounds desperately vague in this kind of debate.
>
> Einstein was probably a rather intelligent man, but I have no reason
> to consider him especially astute.
>
> --
> Stefano Vaj
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
