[extropy-chat] Fundamental limits on the growth rate of superintelligences

Robert Bradbury robert.bradbury at gmail.com
Tue Feb 14 17:59:29 UTC 2006


On 2/14/06, kevinfreels.com <kevin at kevinfreels.com> wrote:
>
> You are asking about fundamental limits on the rate at which a super AI
> becomes smarter, and I am wondering how anyone could answer that question.


Not really.  I was dealing with the fundamental limits set by the physics of
the hardware on which the AI operates.  I can't do much computing with a
single atom (store information, maybe, but not much computation).  Since the
AI sooner or later has to erase bits, it is going to generate heat.  Failure
to remove that heat melts the hardware.  The requirement to think within the
heat-removal capacity limits the thought capacity (and presumably the
intelligence) that the hardware can support.  The same reasoning applies to
the energy required to support faster computations.  (You can compute using
latent heat extracted from the environment, but it is going to be a very
slow computation.)
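
For a rough sense of scale, here is a back-of-the-envelope Python sketch of
the Landauer bound I'm alluding to: erasing one bit dissipates at least
k_B * T * ln(2) of heat, so a fixed heat-removal budget caps the rate at
which bits can be erased.  The temperature and cooling budget below are just
illustrative assumptions.

import math

K_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300.0                        # assumed operating temperature, kelvin
E_BIT = K_B * T * math.log(2)    # minimum heat per erased bit (Landauer)

cooling_budget_watts = 100.0     # assumed heat-removal capacity, W
max_erasures_per_sec = cooling_budget_watts / E_BIT

print("Landauer limit at %g K: %.2e J per erased bit" % (T, E_BIT))
print("With %g W of cooling: at most %.2e bit erasures per second"
      % (cooling_budget_watts, max_erasures_per_sec))

At room temperature that works out to roughly 3e-21 joules per bit, so the
bound sits astronomically far above today's hardware; the point is only that
it exists and that it scales with cooling capacity.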

Compare it to human intelligence and its instantiation (brain + body).  Cut
the brain off from the body (its radiator) and supply it with all the
glucose it needs, and it will probably cook itself.  Cut it off from the
glucose supply and it can't do much at all.  Cut selected sets of neurons
connecting different functional parts of the brain and you should see the
"intelligence" slowly melt away.

There are fundamental limits on how much "intelligence" you can get out
of specific numbers of photons, electrons, atoms, joules, radiator surface
area, etc.
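
The radiator-area term can be made concrete with the Stefan-Boltzmann law: a
surface of area A at temperature T_hot facing an environment at T_env can
reject at most about epsilon * sigma * A * (T_hot^4 - T_env^4) watts.  A
quick sketch (the temperatures and emissivity are assumptions for
illustration):

SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2, t_hot_k, t_env_k, emissivity=0.9):
    """Net power (W) a grey-body radiator can reject to its surroundings."""
    return emissivity * SIGMA * area_m2 * (t_hot_k**4 - t_env_k**4)

# Example: 1 m^2 of radiator at 350 K against a 300 K environment.
print("%.0f W per square metre" % radiated_power(1.0, 350.0, 300.0))

A few hundred watts per square metre at modest temperatures -- which,
combined with the Landauer figure above, is one way of bounding how much
irreversible computation a structure of a given size can sustain.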

I think John is trying to make the case that the AI is going to sneak up on
us and suddenly manifest itself as the overlord, or that each country is
going to try to build its own superintelligence.  What I was attempting to
point out is that we don't need to allow that to happen.  A PlayStation 5 or
6 will probably have the computational capacity to enable more than
human-level intelligence (though I doubt the computational architecture will
facilitate that).  One can, however, always unplug them if they get out of
line.
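
To make the console claim a little less hand-wavy, here is a projection
sketch; every number in it is an assumption (a ~2e11 FLOPS baseline for a
2006-era console, a Moore's-law-style doubling time, and brain-equivalent
estimates that span orders of magnitude in the literature), so treat it as
illustration only.

import math

BASELINE_FLOPS = 2e11        # assumed peak throughput of a 2006-era console
DOUBLING_YEARS = 1.5         # assumed doubling time for peak throughput
BRAIN_ESTIMATES = {          # commonly quoted brain-equivalent figures
    "low estimate":  1e14,
    "high estimate": 1e16,
}

for label, target in BRAIN_ESTIMATES.items():
    doublings = math.log(target / BASELINE_FLOPS, 2)
    print("%s: ~%.0f doublings, ~%.0f years after 2006"
          % (label, doublings, doublings * DOUBLING_YEARS))

On those assumptions the low estimate is roughly a decade out and the high
estimate a couple of decades -- which is the sense in which a PlayStation 5
or 6 might plausibly have the raw capacity.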

It's obviously relatively easy for other countries to detect situations where
some crazy person (or country) is engaging in unmonitored superintelligence
development.  Any time they start constructing power-generating capacity
significantly in excess of what the population is apparently consuming,
and/or start building cooling towers sized not just for a reactor but for a
reactor plus all the electricity it produces, it will be obvious what is
going on and steps can be taken to deal with the situation.

The point is that these things don't happen overnight.  The slow-growth
scenario involving parasitic siphoning of CPU cycles is of concern, as is
allowing ourselves to become overly dependent on highly interconnected
networks that do not allow human oversight of things like software
"upgrades".  [Though I will admit we are getting close to that now.  I have
*not* reviewed every line of source code in the many, many megabytes of
software I've installed over the last couple of months (two Linux installs
and associated packages).  It's only because the hardware isn't yet fast
enough to support an AI that I'm not too worried about it.  But that day is
coming.]

Robert

