[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Sun Jun 10 05:40:28 UTC 2007


On 10/06/07, John K Clark <jonkc at att.net> wrote:

> As I said, the AI is going to have to develop a sense of judgment on its
> own, just like you do.


As with any biological entity, its sense of judgement will depend on the
interaction between its original programming and hardware and its
environment. The bias of the AI's original designers, both humans and other
human-directed AIs, will be to make it unlikely to do anything hostile
towards humans. This will be effected both through its original design and
through a Darwinian process, whereby bad products don't succeed in the
marketplace. An AI may still turn hostile and try to take over, but this
isn't any different from the possibility that a human may acquire or invent
powerful weapons and try to take over. The worst scenario would be if the
AI that turned hostile were more powerful than all the other humans and AIs
put together, but why should that be the case?


-- 
Stathis Papaioannou
