[ExI] What might be enough for a friendly AI?

John Grigg possiblepaths2050 at gmail.com
Fri Nov 19 03:29:46 UTC 2010


Dave Sill wrote:
>So you don't think a vastly superior human-created intellect would
>understand the need for its creators to keep it under control? If the
>risks are obvious to me, they should be even more obvious to the super
>smart AI, and resentment or anger shouldn't even be a factor.

Yes, it may very well understand the human perspective, but that does
not mean it accepts it!  And as for resentment and anger, another
classic AGI debate topic is whether these artificial minds will even
have emotions.  But if it does have a survival motivation, then
depending on how much it learns about human history and psychology, it
may well be desperately looking for a means of escape.

John


On 11/18/10, Dave Sill <sparge at gmail.com> wrote:
> On Thu, Nov 18, 2010 at 10:10 PM, John Grigg
> <possiblepaths2050 at gmail.com> wrote:
>> Yes, we might succeed in containing one if we totally sealed it off
>> from the outside world and had the best security experts around to
>> keep watch and maintain things.  But if we want a "working
>> relationship" with the AGI, then we will have to relax our grip, and
>> then it would be only a matter of time until it escaped.
>
> So you don't think a vastly superior human-created intellect would
> understand the need for its creators to keep it under control? If the
> risks are obvious to me, they should be even more obvious to the super
> smart AI, and resentment or anger shouldn't even be a factor.
>
> -Dave
>