Hi Jef,

On Sun, Nov 14, 2010 at 11:26 AM, Aware <aware@awareresearch.com> wrote:

> The much more significant and accelerating risk is not that of a
> "recursively self-improving" seed AI going rogue and tiling the galaxy
> with paper clips or copies of itself, but of relatively small groups
> of people, exploiting technology (AI and otherwise) disproportionate
> to their context of values.

I disagree about the relative risk, but I'm worried about this too.

> The need is not for a singleton nanny-AI but for development of a
> fractally organized synergistic framework for increasing awareness of
> our present but evolving values, and our increasingly effective means
> for their promotion, beyond the capabilities of any individual
> biological or machine intelligence.

Go ahead and build one; I'm not stopping you.

> It might be instructive to consider that a machine intelligence
> certainly can and will outperform the biological kludge, but
> MEANINGFUL intelligence improvement entails adaptation to a relatively
> more complex environment. This implies that an AI (much more likely a
> human-AI symbiont) poses a considerable threat in present terms, with
> acquisition of knowledge up to and integrating between existing silos
> of knowledge, but lacking relevant selection pressure it is unlikely
> to produce meaningful growth and will expend nearly all its
> computation exploring irrelevant volumes of possibility space.

I'm having trouble parsing this. Isn't it our job to provide that
"selection pressure"? (The term is usually used in Darwinian population
genetics, so I find it slightly odd to see it applied in this context.)

> Singularitarians would do well to consider more ecological models in
> this Red Queen's race.

On a more sophisticated level, I do see it as such. Instead of
organisms being the relevant unit of analysis, I see
mindstuff-environment interactions as the relevant level. AI will
undergo a hard takeoff not by cooperating with the existing ecological
context, but by mass-producing its own mindstuff until the agent itself
constitutes an entire ecology. The end result is more closely analogous
to an alien planet's ecology colliding with our own than to a new
species arising within the current ecology.

-- 
michael.anissimov@singinst.org
Singularity Institute
Media Director