On Fri, Jun 13, 2008 at 6:24 AM, Natasha Vita-More <natasha@natasha.cc> wrote:
> Further that "humans as they are known today are no longer significant in
> shaping the world" is assumptive, based on single-track thinking that SI
> will occur outside of humans and that humans will not be a part of the SI
> advance.

Natasha, a hard-takeoff AI operating largely independently of humans is a real possibility, not single-track thinking.
If someone created a smarter-than-human AI, it might initially be easier for the AI to improve itself recursively than to enhance the intelligence of humans. This would be for various reasons: it would have complete access to its own source code, its cognitive elements would operate much faster, it could extend itself onto adjacent hardware, and so on.
Even a friendly superintelligent AI might decide that the easiest way to help humans is to improve itself quickly, and to offer help only after it has reached an extremely high level of capability.

Are you familiar with the general arguments for a hard-takeoff AI? Quite a few are found here: http://www.singinst.org/upload/LOGI//seedAI.html. If you assign a hard takeoff a very low probability, I would at least expect that you have read the arguments in favor of the possibility and have refutations of them.
> My earnest approach to the future is the combined effort of the human
> brain and technological innovation in enhancing humans to merge with SI
> through stages of development, not one big event that occurs overnight.

This may be your preference, but it may simply turn out to be technologically easier to create a self-improving AI first. Sort of like how I might prefer that fossil fuels be replaced by solar power, yet replacing them with nuclear seems far simpler technologically.
-- 
Michael Anissimov