On Oct 6, 2006, at 6:41 PM, Russell Wallace wrote:

> On 10/7/06, Samantha Atkins <sjatkins@mac.com> wrote:
>
>> Actually, what I know of his opinion is simply that when AGI arrives, humans, even uploaded and augmented post- but former humans, will eventually not be competitive in comparison. He has said that if humanity is to disappear someday, then he would much rather it be because it is replaced by something more intelligent. He has said that in his opinion this would not be such a bad outcome. I don't see anything all that objectionable in that. Nor do I see much room to challenge his conclusion about the competitiveness of humans relative to AIs.
>
> I see plenty of room to challenge it, starting with this: even if you postulate the existence of superintelligent AI in some distant and unknowable future, why would anyone program it to start exterminating humans? I'm certainly not going to do any such thing in the unlikely event my lifespan extends that long. Then there's the whole assumption that more intelligence keeps conferring more competitive ability all the way up without limit, for which there is no evidence. There are various arguments from game theory and from offense versus defense. There are a great many reasons to doubt the conclusion, even based on what I can think of in 2006, let alone what else will arise that nobody has thought of yet.

Russell, I am very surprised at you. Almost no one here believes that AGI is in some unknowably distant future. I am certain you know full well that it is not what humans program the AGI to do that is likely the concern. Hell, if it were just a matter of not programming the AGI to exterminate humans explicitly, there would be nothing to worry about and FAI would be easy! In any field where success is largely a matter of intelligence, information, and its timely application, the significantly faster, brighter, and better informed will exceed what can be done by others. And that doesn't even touch on the depth of Moravec's argument, which you could easily read for yourself.

What is this blunt denial of the obvious about?

- samantha