On 2/10/06, Robert Bradbury <robert.bradbury@gmail.com> wrote:
> Some of the recent discussions I have noticed seem to fail to take into account limits on the rate of growth and logical plateaus of superintelligences. I have written papers about the ultimate limits, e.g. [1], but want to point out some of the things which will constrain the rate of growth to a series of steps.
>
> We do *not* wake up some morning and have a "friendly" AI running the solar system (even if various groups do manage to design something which could eventually manage this). Computational capacity requires at least three things: energy inputs, waste heat disposal, and mass, usually in the form of certain essential elements. If limits are placed on any of these, then the rate of development of an intelligence and its ultimate thought capacities will be limited as well.
>
> Even if a self-evolving AI were to develop, it would still be constrained by my ability to pull its plug out of the wall. If it is distributed via a cable or satellite network, we can still disconnect the cables or take out the antennas. Alternatively, terrorist acts against the necessary cooling towers, or against the mines that produce the essential materials a growing and potentially deceitfully "friendly" AI would need, would be quite effective in limiting its computational capacity. A further way to limit growth is to constrain the self-adaptive and/or manufacturing capability of an AI: even with programmable chips (FPGAs), computer architectures are limited by the underlying hardware with regard to speed of operation, the number of calculations that can be performed within a specific time, etc. So long as an AI lacks the ability to create, and integrate into its architecture, alternative (presumably improved) hardware, or simply more of the same, its growth rate is constrained. [Those familiar with the "broadcast architecture" for nanotechnology manufacturing might see this as a complementary aspect -- an AI could come up with a better architecture for itself, but without the means to implement it and to transfer itself to that implementation, it does little good.]
>
> The only way things could get out of hand, it would appear, is a stealth "grey goo" scenario (where the growth of the AI substrate is hidden from us). But as the Freitas Ecophagy paper points out, there are ways to detect whether this could be taking place under our noses.
>
> So before everyone runs off doing a lot of speculation about what a world of coexisting humans and "friendly" AIs might look like, it is worth taking a serious look at whether humans would allow themselves to be placed in what might become a strategically difficult position by permitting unmanaged growth of, or infiltration of, their computational substrate by AIs.
>
> Another way of looking at this is that humans may only allow the Singularity to happen at a rate at which they can adapt. At some point Ray's curves may hit a wall, not because the technology is limiting but because we choose to limit the rate of change.
>
> Robert
>
> 1. "Life at the Limits of Physical Laws", SPIE 4273-32 (Jan 2001).
> http://www.aeiveos.com:8080/~bradbury/MatrioshkaBrains/OSETI3/4273-32.html
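Robert's energy and waste-heat point can be made concrete with a back-of-envelope bound. The sketch below is a minimal illustration only, assuming the Landauer limit of kT ln 2 joules per irreversible bit operation at room temperature; the 1 GW power budget is a hypothetical figure chosen for scale, not a number from Robert's paper.

import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # assumed operating temperature (room temperature), K
e_bit = k_B * T * math.log(2)  # ~2.9e-21 J per irreversible bit operation

# Hypothetical power budget: one dedicated 1 GW power plant.
power_watts = 1e9

# Hard ceiling on irreversible bit operations per second at that power and temperature.
max_bit_ops = power_watts / e_bit
print(f"Energy per bit operation: {e_bit:.2e} J")
print(f"Ceiling at {power_watts:.0e} W: {max_bit_ops:.2e} bit-ops/s")

# Real hardware runs many orders of magnitude above the Landauer energy per operation,
# and every joule drawn must also be rejected as waste heat -- which is exactly the
# cooling-capacity constraint Robert describes.

However large that ceiling looks, the chokepoints remain physical: the energy has to be supplied and the heat has to be removed, and both are things that can be monitored and, if necessary, cut.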
That seems overly optimistic.
First, unplugging an AI may mean unplugging the Net.
Second, a superintelligence may only need to be (say) 10000x as intelligent as a normal human -- say, about the size of a small car.
Third, all the AI has to do is promise one group vast riches/power etc. in return for a small favour...

Dirk