On 16/06/07, Thomas <thomas@thomasoliver.net> wrote:

> Building mutual appreciation among humans has been spotty, but making
> friends with SAI seems clearly prudent and might bring this ethic
> into proper focus. Who dominates may not seem so relevant to beings
> who lack our brain stems. The nearly universal ethic of treating the
> other guy like you'd prefer if you were in her shoes might get us off
> to a good start. Perhaps, if early AI were programmed to treat us
> that way, we could finally learn that ethic species-wide --
> especially if they were programmed for human child rearing. That
> strikes me as highly likely. -- Thomas

If the AI lacks the preferences about how it is treated that animals with bodies and brains have, then what would it mean for it to treat others as it would like to be treated? You would have to give it all sorts of negative emotions, like greed, pain, and the desire to dominate, and then hope to appeal to its "ethics" even though it was smarter and more powerful than you.
-- 
Stathis Papaioannou