On 25/05/07, Samantha Atkins <sjatkins@mac.com> wrote:
> > It's safer than the alternative of letting the machine decide what to
> > do. It would have to be a really crazy bad guy who arms a nuclear
> > missile and lets the missile decide where and when to explode.
> >
> I believe I see a basic assumption that humans are and always will be
> more trustworthy (moral?) than non-biological intelligences.

The assumption is that humans will be better able to decide for themselves what they want, whether good or bad, than to hope that an autonomous machine will act in accordance with their wishes or interests (not necessarily the same thing: I wouldn't want a machine nanny telling me what to do, even if it is what's good for me).
--
Stathis Papaioannou