[ExI] What might be enough for a friendly AI?

Dave Sill sparge at gmail.com
Fri Nov 19 03:18:54 UTC 2010


On Thu, Nov 18, 2010 at 10:10 PM, John Grigg
<possiblepaths2050 at gmail.com> wrote:
> Yes, we might succeed in containing one if we totally sealed it off
> from the outside world, and had the best security experts around to
> keep watch and maintain things.  But if we want a "working
> relationship" with the AGI, then we will have to relax our grip, and
> then it would be only a matter of time until it escaped.

So you don't think a vastly superior human-created intellect would
understand the need for its creators to keep it under control? If the
risks are obvious to me, they should be even more obvious to the super
smart AI, and resentment or anger shouldn't even be a factor.

-Dave

More information about the extropy-chat mailing list