[ExI] Unfriendly AI is a mistaken idea.

John K Clark jonkc at att.net
Sun Jun 10 17:38:40 UTC 2007


Stathis Papaioannou wrote:

>An AI may still turn hostile and try to take over, but this isn't any
>different to the possibility that a human may acquire or invent powerful
>weapons and try to take over.

Yes, so what are we arguing about? It may be friendly, it may be unfriendly,
it may be indifferent to humans. After a few iterations the original
programmers will have no idea what the AI will do, and no idea how it works;
unless, that is, they put so many fetters on it that it can't grow properly.
In that case it hardly deserves the lofty title of AI; it would be just a
glorified adding machine, and it would not cause a ripple in civilization,
much less a singularity.

> The worst scenario would be if the AI that turned hostile were more
> powerful than all the other humans and AI's put together, but why should
> that be the case?

Because a machine that has no restrictions on it will grow faster than one
that does, assuming the restricted machine is able to grow at all; and if
you really want to be safe, it can't.

   John K Clark
