<br><br><div><span class="gmail_quote">On 10/06/07, <b class="gmail_sendername">John K Clark</b> <<a href="mailto:jonkc@att.net">jonkc@att.net</a>> wrote:<br><br></span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
As I said, the AI is going to have to develop a sense of judgment on its<br>own, just like you do.</blockquote><div><br>As with any biological entity, its sense of judgement will depend on the interaction between its original programming and hardware and its environment. The bias of the AI's original designers, humans and other human-directed AIs, will be to make it unlikely to do anything hostile towards humans. This will be effected both through its original design and through a Darwinian process, whereby bad products don't succeed in the marketplace. An AI may still turn hostile and try to take over, but this isn't any different from the possibility that a human may acquire or invent powerful weapons and try to take over. The worst scenario would be if the AI that turned hostile were more powerful than all the other humans and AIs put together, but why should that be the case?
<br><br></div></div><br>-- <br>Stathis Papaioannou