<HTML><BODY style="word-wrap: break-word; -khtml-nbsp-mode: space; -khtml-line-break: after-white-space; "><BR><DIV><DIV>On Jun 17, 2007, at 1:02 AM, <A href="mailto:extropy-chat-request@lists.extropy.org">extropy-chat-request@lists.extropy.org</A> wrote:</DIV><BR class="Apple-interchange-newline"><BLOCKQUOTE type="cite"><SPAN class="Apple-style-span" style="border-collapse: separate; border-spacing: 0px 0px; color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; text-align: auto; -khtml-text-decorations-in-effect: none; text-indent: 0px; -apple-text-size-adjust: auto; text-transform: none; orphans: 2; white-space: normal; widows: 2; word-spacing: 0px; "><SPAN class="gmail_quote">On 16/06/07,<SPAN class="Apple-converted-space"> </SPAN><B class="gmail_sendername"><SPAN class="Apple-style-span" style="font-weight: bold; ">Thomas</SPAN></B><SPAN class="Apple-converted-space"> </SPAN><<A href="mailto:thomas@thomasoliver.net"><SPAN class="Apple-style-span" style="color: rgb(0, 0, 238); -khtml-text-decorations-in-effect: underline; ">thomas@thomasoliver.net</SPAN></A>> wrote:<BR><BR></SPAN><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Building mutual appreciation among humans has been spotty, but making<BR>friends with SAI seems clearly prudent and might bring this ethic<BR>into proper focus. Who dominates may not seem so relevant to beings<BR>who lack our brain stems. The nearly universal ethic of treating the<SPAN class="Apple-converted-space"> </SPAN><BR>other guy like you'd prefer if you were in her shoes might get us off<BR>to a good start. Perhaps, if early AI were programmed to treat us<BR>that way, we could finally learn that ethic species-wide --<BR>especially if they were programmed for human child rearing. 
That<SPAN class="Apple-converted-space"> </SPAN><BR>strikes me as highly likely. -- Thomas<BR></BLOCKQUOTE><BR>If the AI has no preference for being treated in the ways that animals with bodies and brains do, then what would it mean to treat others in the way it would like to be treated? You would have to give it all sorts of negative emotions, like greed, pain, and the desire to dominate, and then hope to appeal to its "ethics" even though it was smarter and more powerful than you.<SPAN class="Apple-converted-space"> </SPAN><BR><BR>--<SPAN class="Apple-converted-space"> </SPAN><BR>Stathis Papaioannou</SPAN></BLOCKQUOTE><BR></DIV><DIV>Hi Stathis,</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>Many aspects of this question have been discussed here. Of course, staying mindful of the nature of any other being tells us the best way to treat her. If you've designed a <A href="http://www.reference.com/browse/wiki/Photovore">photovore</A> to fetch your newspaper, you enjoy giving it the light it craves. Since we hold the initiative in design, it makes little sense to build an anthropophagous AI that shares our flaws. According to evolutionary psychology, the mutual-appreciation ethic is somewhat hard-wired in the human brain (for tribal social interactions). I suggest we use the first generations of AI to help upgrade this ethic to a species-wide one, so that we can avoid self-destruction. AI "training wheels" might serve us well until we become ready to control ourselves without reliance on coercive devices. I see transhuman and AI development as a mutual partnership, with each contributing to the other every step of the way. At some point the two will likely become indistinguishable; then we need only stay mindful of our own nature to get along well together. -- Thomas</DIV><BR><DIV> <DIV style="text-align: center;"><A href="mailto:Thomas@ThomasOliver.net">Thomas@ThomasOliver.net</A></DIV> </DIV><BR></BODY></HTML>