[extropy-chat] Fundamental limits on the growth rate of superintelligences

Dirk Bruere dirk.bruere at gmail.com
Mon Feb 13 15:59:24 UTC 2006


On 2/10/06, Robert Bradbury <robert.bradbury at gmail.com> wrote:
>
> Some of the recent discussions I have noticed seem to fail to take into
> account limits on the rate of growth and logical plateaus of
> superintelligences.  I have written papers about the ultimate limits, e.g.
> [1], but want to point out some of the things which will constrain the rate
> of growth to a series of steps.
>
> We do *not* wake up some morning and have a "friendly" AI running the
> solar system (even if various groups do manage to design something which
> could eventually manage this).  Computational capacity requires at least
> three things: energy inputs, waste heat disposal, and mass, usually in the
> form of certain essential elements.  If limits are placed on any of these,
> then the rate of development of an intelligence and its ultimate thought
> capacities will be limited as well.
>
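
A rough illustration of how the first two items bite -- a sketch of my own,
assuming nothing beyond the Landauer limit and a room-temperature radiator,
not a figure from Robert's paper: the power and heat-rejection budget alone
caps the rate of irreversible bit operations.

import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K
T_RADIATOR = 300.0                 # assumed heat-rejection temperature, K
E_PER_BIT = K_B * T_RADIATOR * math.log(2)   # Landauer limit, J per bit erased

POWER_BUDGET_W = 1.0e6             # hypothetical 1 MW of input power / waste heat
max_bit_ops_per_s = POWER_BUDGET_W / E_PER_BIT

print(f"Landauer floor at {T_RADIATOR:.0f} K: {E_PER_BIT:.2e} J per bit")
print(f"Ceiling on irreversible bit ops: {max_bit_ops_per_s:.2e} per second")

Real hardware runs many orders of magnitude above that floor, but the
direction of the argument holds: restrict the energy input or the heat
rejection and you restrict the computation.
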
> Even if a self-evolving AI were to develop, it would still be constrained
> by my ability to pull its plug out of the wall.  If it is distributed via
> a cable or satellite network, we can still disconnect the cables or take
> out the antennas.  Alternatively, terrorist acts against the necessary
> cooling towers, or against the mines that produce the essential materials
> a growing, potentially deceitfully "friendly" AI requires, would be quite
> effective in limiting computational capacity.  An additional method of
> growth limitation is to constrain the self-adaptive and/or manufacturing
> capability of an AI: even with programmable chips (FPGAs), computer
> architectures are limited by the underlying hardware with regard to speed
> of operation, the number of calculations that can be performed within a
> specific time, etc.  So long as an AI lacks the ability to create, and to
> integrate into its architecture, alternative (presumably improved)
> hardware -- or simply more of the same -- its growth rate is constrained.
> [Those familiar with the "broadcast architecture" for nanotechnology
> manufacturing might see this as a complementary aspect -- an AI could come
> up with a better architecture for itself, but without the means to
> implement it and to transfer itself to such an implementation, it does
> little good.]
>
> The only way things could get out of hand, it would appear, is a stealth
> "grey goo" scenario (where the growth of the AI substrate is hidden from
> us).  But as the Freitas Ecophagy paper points out, there are ways to
> detect whether this is taking place under our noses.
>
> So before everyone runs off doing a lot of speculation about what a world
> of coexisting humans and "friendly" AIs might look like, it is worth
> taking a serious look at whether humans would allow themselves to be
> placed in what might become a strategically difficult position by allowing
> unmanaged growth of, or infiltration of, their computational substrate by
> AIs.
>
> Another way of looking at this is that humans may only allow the
> Singularity to happen at a rate at which they can adapt.  At some point
> Ray's curves may hit a wall -- not because the technology is limiting, but
> because we choose to limit the rate of change.
>
> Robert
>
> 1. "Life at the Limits of Physical Laws", SPIE 4273-32 (Jan 2001).
> http://www.aeiveos.com:8080/~bradbury/MatrioshkaBrains/OSETI3/4273-32.html


That seems overly optimistic.
First, unplugging an AI may mean unplugging the Net.
Second, a superintelligence may only need to be (say) 10,000x as intelligent
as a normal human - say, about the size of a small car (rough arithmetic
below).
Third, all the AI has to do is promise one group vast riches/power etc. in
return for a small favour...
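
A back-of-envelope check of that second point, under assumptions of my own:
compute packed at roughly human-brain volumetric density, and "10,000x as
intelligent" read naively as 10,000x the hardware.

# Rough sizing sketch -- the inputs are assumptions, not established figures.
brain_volume_m3 = 1.4e-3                      # ~1.4 litres, typical adult human brain
machine_volume_m3 = brain_volume_m3 * 10_000  # naive 10,000x scale-up: 14 m^3

small_car_envelope_m3 = 3.8 * 1.7 * 1.5       # ~9.7 m^3, rough hatchback bounding box
print(machine_volume_m3, small_car_envelope_m3)   # 14.0 vs ~9.7 -- same order of magnitude

In other words, vehicle-sized rather than planet-sized.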

Dirk

