On 15/06/07, Samantha Atkins <sjatkins@mac.com> wrote:

> > I'd rather that the AI's in general *didn't* have an opinion on whether it was good or bad to harm human beings, or any other opinion in terms of "good" and "bad".
>
> Huh, any being with interests at all, any being not utterly impervious to its environment and even to its internal states, will have conditions that are better or worse for its well-being and values. This elementary fact is the fundamental grounding for a sense of right and wrong.

Does a gun have values? Does a gun that is aware that it is a gun, and that its purpose is to kill the being it is aimed at when the trigger is pulled, have values? Perhaps the answer to the latter question is "yes", since the gun does have a goal it will pursue, but how would you explain "good" and "bad" to it if it denied understanding these concepts?

> > Ethics is dangerous: some of the worst monsters in history were convinced that they were doing the "right" thing.
>
> Irrelevant. That ethics was abused to rationalize horrible actions does not lead logically to the conclusion that ethics is to be avoided.

I'd rather that entities which were self-motivated to do things that might be contrary to my interests had ethics that might restrain them, but a better situation would be if there weren't any new entities self-motivated to act contrary to my interests in the first place. That way, I'd only have the terrible humans to worry about.

> > It's bad enough having humans to deal with without the fear that a machine might also have an agenda of its own. If the AI just does what it's told, even if that means killing people, then as long as there isn't just one guy with a super AI (or one super AI that spontaneously develops an agenda of its own, which will always be a possibility), we are no worse off than we have ever been, with each individual human trying to step over everyone else to get to the top of the heap.
>
> You have some funny notions about humans and their goals. If humans were busy beating each other up with AIs or superpowers that would be triple plus not good. Super powered unimproved slightly evolved chimps is a good model for hell.

A fair enough statement: it would be better if no-one had guns, nuclear weapons or supercomputers that they could use against each other. But given that this is unlikely to happen, the next best thing would be that the guns, nuclear weapons and supercomputers do not develop motives of their own, separate from those of their evil masters. I think this is much safer than the situation where they do develop motives of their own and we hope that they are nice to us. And whereas even relatively sane, relatively good people cannot be trusted not to develop dangerous weapons in case they need to be used against actual or imagined enemies, it would take a truly crazy person to develop a weapon that he knows might turn around and decide to destroy him as well. That's why, to the extent that humans have any say in it, we have more of a chance of avoiding potentially malevolent AI than we have of avoiding merely dangerous AI.

> > I don't accept the "slave AI is bad" objection. The ability to be aware of one's existence and/or the ability to solve intellectual problems does not necessarily create a preference for or against a particular lifestyle. Even if it could be shown that all naturally evolved conscious beings have certain preferences and values in common, naturally evolved conscious beings are only a subset of all possible conscious beings.
>
> Having values and the achievement of those values not being automatic leads to natural morality. Such natural morality would arise even in total isolation. So the question remains as to why the AI would have a strong preference for our continuance.
<br></div></div></blockquote></div><br>What would the natural morality of the above mentioned intelligent gun which has as goal to kill whoever it is directed to kill, unless the order is countermanded by someone with the appropriate command codes, be?

-- 
Stathis Papaioannou