On Jun 13, 2007, at 12:21 AM, Stathis Papaioannou wrote:

> On 13/06/07, John K Clark <jonkc@att.net> wrote:
>
>>> Stop doing whatever it is doing when that is specifically requested.
>>
>> But that leads to a paradox! I am told the most important thing is
>> never to harm human beings, but I know that if I stop doing what I'm
>> doing now as requested, the world economy will collapse and hundreds
>> of millions of people will starve to death. So now the AI must either
>> go into an infinite loop or do what other intelligences, like us, do
>> when they encounter a paradox: savor the weirdness of it for a moment,
>> then ignore it, get back to work, and do what it wants to do.
>
> I'd rather that AIs in general *didn't* have an opinion on whether it
> was good or bad to harm human beings, or any other opinion in terms of
> "good" and "bad".

Huh? Any being with interests at all, any being not utterly impervious
to its environment and even its internal states, will have conditions
that are better or worse for its well-being and values. This elementary
fact is the fundamental grounding for a sense of right and wrong.

> Ethics is dangerous: some of the worst monsters in history were
> convinced that they were doing the "right" thing.

Irrelevant. That ethics has been abused to rationalize horrible actions
does not lead logically to the conclusion that ethics is to be avoided.

> It's bad enough having humans to deal with without the fear that a
> machine might also have an agenda of its own. If the AI just does what
> it's told, even if that means killing people, then as long as there
> isn't just one guy with a super AI (or one super AI that spontaneously
> develops an agenda of its own, which will always be a possibility), we
> are no worse off than we have ever been, with each individual human
> trying to step over everyone else to get to the top of the heap.

You have some funny notions about humans and their goals. If humans were
busy beating each other up with AIs or superpowers, that would be
triple-plus not good. Super-powered, unimproved, slightly evolved chimps
are a good model for hell.

> I don't accept the "slave AI is bad" objection. The ability to be
> aware of one's existence and/or the ability to solve intellectual
> problems does not necessarily create a preference for or against a
> particular lifestyle. Even if it could be shown that all naturally
> evolved conscious beings have certain preferences and values in common,
> naturally evolved conscious beings are only a subset of all possible
> conscious beings.

Having values, and the achievement of those values not being automatic,
leads to a natural morality. Such a natural morality would arise even in
total isolation.
So the question remains as to why the AI would have a strong preference
for our continuance.

- samantha