On 12/21/05, Samantha Atkins <sjatkins@mac.com> wrote:

> > If we, being as human as we are, can be so apathetic about the survival
> > of our own species, why would anyone believe that some mighty non-human
> > intelligence would give a rat's ass about us?
>
> Good question. But not relevant as to whether we must build such
> intelligence if we are to have much of a chance of continuing at all.
Actually Samantha, I see few, if any, barriers to current-instantiation humans ensuring the "survival of our own species". We do *not* need to become more intelligent. We need to use the intelligence already at our disposal more effectively or creatively. For example, the full nanotech vision can be realized with current human intelligence; it is essentially a re-execution of the development of the semiconductor industry over the last 20-30 years. We already have various means of detecting and mitigating most short-term hazards (NEOs, etc.), and we have several billion years to figure out what to do about the longer term (i.e. when the sun exhausts its fuel supply).
Could a single human ensure our survival? I doubt it. Could we as a group? Quite probably. The question becomes *who* survives and whether the strategies followed are "optimal". This goes back to the famous Star Trek question: when do "the needs of the many outweigh the needs of the few (or the one)"?
And so one must ask, if one is going to rely on a "super-intelligence" to "save" us, how does one guarantee that it is (or will remain) so altruistic that our "optimal" survival is the end result? Indeed, one could make the case that any entity which does not act with its own survival as its primary goal isn't really "intelligent". (The male spiders which sacrifice themselves to contribute protein to females, thereby increasing the number of offspring, come to mind...)
> > How do we know that the most likely scenario for the Singularity is that
> > when the AI boots up, it doesn't take a look around, decides we are not
> > worth the effort to save, and decides to write angst-ridden haiku and
> > solve crossword puzzles all day.
>
> I doubt very much that angst or crossword puzzles are part of its goal
> structure.
Depends. If it is intelligent enough to compute the final outcomes for "life" (which is a distinct possibility), then haiku and crossword puzzles may not be such a bad way to burn time. Certainly no better or worse than ensuring the survival, or preserving the information content, of states of matter which are recognized as inherently suboptimal or whose future paths can easily be predicted.
Robert