On 2/16/06, Russell Wallace <russell.wallace@gmail.com> wrote:
> On 2/16/06, Dirk Bruere <dirk.bruere@gmail.com> wrote:
> > http://www.transhumanism.org/index.php/th/print/293/
>
> A fun story, but citing it as evidence that a superintelligence is
> going to destroy the world is like citing H.G. Wells as evidence of
> intelligent life on Mars.
Of course I'm not citing it as evidence, but I am citing it as an illustration of one strongly possible outcome.
Given, of course, that AI is ever developed, which is itself an article of faith.

Dirk