On 10/7/06, Samantha Atkins <sjatkins@mac.com> wrote:
> Actually, what I know of his opinion is simply that when AGI arrives,
> humans, even uploaded and augmented posthumans who were once human,
> will eventually not be competitive in comparison. He has said that if
> humanity is to disappear someday, then he would much rather it be
> because it is replaced by something more intelligent. He has said that
> in his opinion this would not be such a bad outcome. I don't see
> anything all that objectionable in that. Nor do I see much room to
> challenge his conclusion about the competitiveness of humans relative
> to AIs.
I see plenty of room to challenge it. To start with: even if you
postulate the existence of superintelligent AI in some distant and
unknowable future, why would anyone program it to start exterminating
humans? I'm certainly not going to do any such thing in the unlikely
event my lifespan extends that long. Then there's the whole assumption
that more intelligence keeps conferring more competitive ability all
the way up without limit, for which there is no evidence. There are
also various counterarguments from game theory and from the balance of
offense versus defense. There are a great many reasons to doubt the
conclusion, even based on what I can think of in 2006, let alone what
else will arise that nobody has thought of yet.