[ExI] Limiting factors of intelligence explosion speeds

Richard Loosemore rpwl at lightlink.com
Thu Jan 20 19:27:36 UTC 2011


Anders Sandberg wrote:
> One of the things that struck me during our Winter Intelligence 
> workshop on intelligence explosions was how confident some people 
> were about the speed of recursive self-improvement of AIs, brain 
> emulation collectives or economies. Some thought it was going to be 
> fast in comparison to societal adaptation and development timescales
>  (creating a winner takes all situation), some thought it would be 
> slow enough for multiple superintelligent agents to emerge. This 
> issue is at the root of many key questions about the singularity (one
>  superintelligence or many? how much does friendliness matter?)
> 
> It would be interesting to hear this list's take on it: what do you 
> think is the key limiting factor for how fast intelligence can 
> amplify itself?
> 
> Some factors that have been mentioned in past discussions:

Before we get to the specific factors you list, some general points.

A)  Although we can try our best to understand how an intelligence
explosion might happen, the truth is that there are too many 
interactions between the factors for any kind of reliable conclusion to 
be reached. This is a complex-system interaction in which even the 
tiniest, least-anticipated factor may turn out to be the rate-limiting 
step (or, conversely, the spark that starts the fire).

B)  There are two types of answer that can be given.  One is based on
quite general considerations.  The second has to be based on what I, as
an AGI researcher, believe I understand about the way in which AGI will
be developed.  I will keep back the second one for the end, so people 
can kick that one to the ground as a separate matter.

C)  There is one absolute prerequisite for an intelligence explosion,
and that is that an AGI becomes smart enough to understand its own
design.  If it can't do that, there is no explosion, just growth as 
usual.  I do not believe it makes sense to talk about what happens 
*before* that point as part of the "intelligence explosion".

D)  When such a self-understanding system is built, it is unlikely that
it will be the creation of a lone inventor who does it in their shed at
the bottom of the garden, without telling anyone.  Very few of the "lone
inventor" scenarios (the Bruce Wayne scenarios) are plausible.

E)  Most importantly, the invention of a human-level, self-understanding
AGI would not lead to a *subsequent* period (we can call it the
"explosion period") in which the invention just sits on a shelf with
nobody bothering to pick it up, a situation in which it would be just
one quiet invention alongside thousands of others, unrecognized and not
generally believed in.

F)  When the first human-level AGI is developed, it will either require
supercomputer-level hardware resources, or it will be achievable
with much less.  This is significant, because world-class supercomputer
hardware is not something that can quickly be duplicated on a large
scale.  We could make perhaps hundreds of such machines, with a massive
effort, but probably not a million of them in a couple of years.

G)  There are two types of intelligence speedup:  one due to faster
operation of an intelligent system (clock speed) and one due to
improvement in the type of mechanisms that implement the thought
processes.  Obviously both could occur at once, but the latter is far
more difficult to achieve, and may be subject to fundamental limits that
we do not understand.  Speeding up the hardware, on the other hand, has
been going on for a long time and is more mundane and reliable.  Notice
that both routes lead to greater "intelligence", because even a human
level of thinking and creativity would be more effective if it were
happening (say) a thousand times faster than it does now.
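
To put rough numbers on the clock-speed route (my own arithmetic, using
the illustrative 1,000x figure from above, not a prediction), a quick
Python sketch:

# Back-of-the-envelope arithmetic for the "faster clock" route: the
# same human-level thinking, run k times faster.  The speedup factor
# of 1,000 is purely illustrative.

HOURS_PER_YEAR = 24 * 365

def wall_clock_hours(subjective_years, speedup):
    """Wall-clock hours needed for a given amount of subjective thought."""
    return subjective_years * HOURS_PER_YEAR / speedup

# One subjective year of thought at 1,000x clock speed:
print(wall_clock_hours(1, 1000))    # ~8.8 hours
# A 30-year research career at the same speedup:
print(wall_clock_hours(30, 1000))   # ~263 hours, roughly 11 days

So the "mere" clock-speed route already turns research careers into
working weeks, without any change in the quality of the thinking.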


*********************************************

Now the specific factors you list.

1) Economic growth rate

One consequence of the above reasoning is that economic growth rate
would be irrelevant.  If an AGI were that smart, it would already be
obvious to many that this was a critically important technology, and no
effort would be spared to improve the AGI "before the other side does". 
Entire national economies would be subordinated to the goal of developing 
the first superintelligent machine.

In fact, economic growth rate would be *defined* by the intelligence
explosion projects taking place around the world.


2) Investment availability

The above reasoning also applies to this case.  Investment would be 
irrelevant because the players would either be governments or frenzied
bubble-investors, and they would be pumping it in as fast as money could 
be printed.


3) Gathering of empirical information (experimentation, interacting with
an environment).

So, this is about the fact that the AGI would need to do some 
experimentation and interaction with the environment.  For example, if 
it wanted to reimplement itself on faster hardware (the quickest route 
to an intelligence increase) it would probably have to set up its own 
hardware research laboratory and gather new scientific data by doing 
experiments, some of which would go at their own speed.

The question is:  how much of the research can be sped up by throwing
large amounts of intelligence at it?  This is the parallel-vs-serial
problem (i.e. you can't make a baby nine times quicker by asking nine 
women to be pregnant for one month).
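
One standard way to put numbers on this is Amdahl's-law-style reasoning
(my framing, imported for illustration): if a fraction s of a research
programme is irreducibly serial -- experiments that must run in real
time -- then no amount of extra intelligence applied in parallel can
speed the whole programme up by more than 1/s.  A minimal Python sketch,
with made-up serial fractions:

# Amdahl's-law style bound on speeding up a research programme by
# adding more parallel "thinkers", when a fraction s of the work is
# irreducibly serial.  The serial fractions below are invented purely
# for illustration.

def overall_speedup(serial_fraction, parallel_speedup):
    """Overall speedup when only the parallelizable part is accelerated."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / parallel_speedup)

for s in (0.5, 0.1, 0.01):
    print(f"serial fraction {s}: 1,000x thinkers give "
          f"{overall_speedup(s, 1000):.1f}x overall (ceiling {1 / s:.0f}x)")

Of course, the whole question is what the effective serial fraction
would be, and, as I argue below, a smart enough researcher can often
shrink it by replacing a slow experiment with a faster one that gives
the same information.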

This is not a factor that I believe we can understand very well ahead of
time, because some experiments that look as though they require
fundamentally slow physical processes -- like waiting for a silicon
crystal to grow, so we can study a chip fabrication mechanism -- may
actually be dependent on smartness, in ways that we cannot anticipate.
It could be that instead of waiting for the chips to grow at their own
speed, the AGI can do clever micro-experiments that give the same
information faster.

This factor invites unbridled speculation and opinion, to such an extent
that there are more opinions than facts.  However, we can make one
observation that cuts through the arguments.  Of all the factors that 
determine how fast empirical scientific research can be carried out, we 
know that the intelligence and thinking speed of the scientists themselves 
*must* be one of the most important, today.  It seems likely that in our 
present state of technological sophistication, advanced research 
projects are limited by the availability and cost of intelligent and 
experienced scientists.

But if research labs around the world have stopped throwing *more*
scientists at problems they want to solve, because the latter cannot be
had, or are too expensive, would it be likely that the same research
labs are *also*, quite independently, at the limit for the physical rate
at which experiments can be carried out?  It seems very unlikely that 
both of these limits have been reached at the same time, because they 
cannot be independently maximized.  (This is consistent with anecdotal 
reports:  companies complain that research staff cost a lot and that 
scientists are in short supply; they don't complain that nature is just 
too slow.)

In that case, we should expect that any experiment-speed limits lie up
the road, out of sight.  We have not reached them yet.

So, for that reason, we cannot speculate about exactly where those
limits are.  (And, to reiterate:  we are talking about the limits that
hit us when we can no longer do an end-run around slow experiments by
using our wits to invent different, quicker experiments that give the
same information).

Overall, I think that we do not have concrete reasons to believe that
this will be a fundamental limit that stops the intelligence explosion
from taking an AGI from human level (call it H) to (say) 1,000 H.
Increases in speed within
that range (for computer hardware, for example) are already expected,
even without large numbers of AGI systems helping out, so it would seem 
to me that physical limits, by themselves, would not stop an explosion 
that went from I = H to I = 1,000 H.


4)  Software complexity

By this I assume you mean the complexity of the software that an AGI
must develop in order to explode its intelligence.  The premise is
that even an AGI with self-knowledge finds it hard to cope with the
fabulous complexity of the problem of improving its own software.

This seems implausible as a limiting factor, because the AGI could
always leave the software alone and develop faster hardware.  So long as
the AGI can find a substrate that gives it (say) 1,000 H thinking-speed,
we have the possibility for a significant intelligence explosion.

Arguing that software complexity will stop the initial human-level AGI 
from being built is a different matter.  It may stop an intelligence 
explosion from happening by stopping the precursor events, but I take 
that to be a different type of question.


5)  Hardware demands vs. available hardware

I have already mentioned, above, that a lot depends on whether the first 
AGI requires a large (world-class) supercomputer, or whether it can be 
done on something much smaller.

This may limit the initial speed of the explosion, because one of the 
critical factors would be the sheer number of copies of the AGI that can 
be created.  Why is this a critical factor?  Because the ability to copy 
the intelligence of a fully developed, experienced AGI is one of the big 
new factors that makes the intelligence explosion what it is:  you 
cannot do this for humans, so human geniuses have to be rebuilt from 
scratch every generation.

So, an initial requirement that the AGI run on a supercomputer would 
make it hard to replicate the AGI on a huge scale, and the replication 
rate would (mostly) determine the intelligence-production rate.

However, as time went on, the rate of replication would grow, as 
hardware costs went down at their usual rate.  This would mean that the 
*rate* of arrival of high-grade intelligence would increase in the years 
following the start of this process.  That intelligence would then be 
used to improve the design of the AGIs (at the very least, increasing 
the rate of new-and-faster-hardware production), which would have a 
positive feedback effect on the intelligence production rate.
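
To illustrate the shape of that feedback loop, here is a toy model (all
parameters are invented for illustration; this is not a forecast): AGI
copies accumulate as hardware is purchased, and the growing AGI
population in turn accelerates the fall in hardware cost.

# Toy model of the feedback loop sketched above: copies accumulate as
# hardware cost falls, and the installed AGI population speeds up the
# hardware improvement itself.  All numbers are invented.

def simulate(years=15,
             budget=1e9,          # assumed annual hardware spend ($)
             initial_cost=1e8,    # assumed hardware cost per AGI copy ($)
             base_decline=0.3,    # assumed baseline yearly cost decline
             feedback=0.02):      # assumed extra decline per 100 AGIs
    cost, population = initial_cost, 0.0
    for year in range(1, years + 1):
        population += budget / cost        # copies added this year
        # Cost falls faster as more AGIs work on hardware design,
        # capped at a 90% drop in any single year.
        decline = min(0.9, base_decline + feedback * population / 100.0)
        cost *= 1.0 - decline
        print(f"year {year:2d}: {population:14,.0f} copies, "
              f"unit cost ${cost:,.0f}")

simulate()

The exact numbers mean nothing; the point is the qualitative shape:
slow early replication while the hardware is expensive, followed by a
rapidly steepening curve once the feedback term starts to dominate.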

So I would see a large-hardware requirement for the first AGI as 
something that would dampen the initial stages of the explosion.  But 
the positive feedback after that would eventually lead to an explosion 
anyway.

If, on the other hand, the initial hardware requirements are modest (as 
they very well could be), the explosion would come out of the gate at 
full speed.




6)  Bandwidth

Alongside the aforementioned replication of adult AGIs, which would 
allow the multiplication of knowledge in ways not currently available in 
humans, there is also the fact that AGIs could communicate with one 
another using high-bandwidth channels.  This is inter-AGI bandwidth.

As a separate issue, there might be bandwidth limits inside an AGI, 
which might make it difficult to augment the intelligence of a single 
system.  This is intra-AGI bandwidth.

The first one - inter-AGI bandwidth - is probably less of an issue for 
the intelligence explosion, because so many research issues can be 
split into separately addressable components that I doubt we would find 
AGIs sitting around with no work to do on the intelligence 
amplification project just because they were waiting for other AGIs to 
get a free channel to talk to them.

Intra-AGI bandwidth is another matter entirely.  There could be 
limitations on the IQ of an AGI -- for example if working memory 
limitations (the magic number seven, plus or minus two) turned out to be 
caused by connectivity/bandwidth limits within the system.

However, notice that such factors may not inhibit the initial phase of 
an explosion, because the clock speed, not IQ, of the AGI may be 
improvable by several orders of magnitude before bandwidth limits kick 
in.  The reasoning behind this is the observation that neural signal 
speed is so slow.  Suppose a brain-like system (not necessarily a whole 
brain emulation, but just something that replicated the high-level 
functionality) could be built using components that kept the same type 
of processing demands and the same signal speed.  In that kind of 
system there would then be plenty of room to develop faster signal 
speeds and increase the intelligence of the system.
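
The headroom here can be put in rough, well-known orders of magnitude
(the comparison is only meant as a sanity check):

# Order-of-magnitude comparison of biological and electronic signalling.
# The biological figures are standard textbook ranges; the electronic
# figures are deliberately conservative.

neural_conduction_m_per_s = 100.0   # fast myelinated axons, roughly
electronic_signal_m_per_s = 2.0e8   # roughly 2/3 of lightspeed in a wire

synaptic_delay_s = 1e-3             # ~1 ms per chemical synapse
logic_gate_delay_s = 1e-9           # ~1 ns, conservative for modern logic

speed_ratio = electronic_signal_m_per_s / neural_conduction_m_per_s
delay_ratio = synaptic_delay_s / logic_gate_delay_s
print(f"signal speed ratio   : {speed_ratio:,.0f}x")   # 2,000,000x
print(f"switching delay ratio: {delay_ratio:,.0f}x")   # 1,000,000x

Even if only a small slice of that headroom were actually usable, the
few orders of magnitude of clock-speed increase discussed above would
be available before intra-system bandwidth became the binding
constraint.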

Overall, this is, I believe, the factor that is most likely to cause 
trouble.  However, much more research is needed before anything can be 
said with certainty.

Most importantly, this depends on *exactly* what type of AGI is being 
built.  Making naive assumptions about the design can lead to false 
conclusions.



7)  Lightspeed lags

This is not much different from bandwidth limits, in terms of the effect 
it has.  It would be a significant problem if the components of the 
machine were physically so far apart that the (by assumption) massive 
amounts of data passing between them were delivered with a significant 
delay.

By itself, again, this seems unlikely to be a problem in the initial few 
orders of magnitude of the explosion.  Again, the argument derives from 
what we know about the brain.  We know that the brain's hardware was 
chosen due to biochemical constraints.  We are carbon-based, not 
silicon-and-copper-based, so, no chips in the head, only pipes filled 
with fluid and slow molecular gates in the walls of the pipes.  But 
given that nature used the pipes-and-ion-channels approach, there seems 
to be 
plenty of scope for speedup with a transition to silicon and copper (and 
never mind all the other more exotic computing substrates on the 
horizon).  If that transition produced a 1,000x speedup, this would be 
an explosion worthy of the name.
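
A quick sanity check on the raw scale of lightspeed lags (round numbers
of my own choosing):

# Signal-propagation delay across machines of various sizes, compared
# with the delays the brain already tolerates.  The sizes are
# illustrative.

SIGNAL_SPEED_M_PER_S = 2.0e8   # roughly 2/3 c in a typical conductor

for label, metres in [("chip-sized (3 cm)", 0.03),
                      ("rack-sized (2 m)", 2.0),
                      ("building-sized (100 m)", 100.0)]:
    delay_ns = metres / SIGNAL_SPEED_M_PER_S * 1e9
    print(f"{label:24s}: one-way delay ~{delay_ns:6.1f} ns")

# For comparison, a neural signal crossing ~10 cm of brain at ~10 m/s
# takes on the order of 10 ms, i.e. about 10,000,000 ns.

So even a building-sized machine would be several orders of magnitude
ahead of the brain on this count; lightspeed only starts to bite when
the delays it imposes approach the system's own cycle times.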

The only reason this might not happen would be if, for some reason, the 
brain is limited on two fronts simultaneously:  both by the carbon 
implementation and by the fact that bigger brains cause disruptive 
light-speed delays.  Or if every non-carbon implementation of the brain 
took us up close to the lightspeed limit before yielding much of a 
speedup over the brain.  Neither of these ideas seems plausible.  In 
fact, they 
both seem to me to require a coincidence of limiting factors (two 
limiting factors just happening to kick in at exactly the same level), 
which I find deeply implausible.


*****************

Finally, some comments about approaches to AGI that would affect the 
answer to this question about the limiting factors for an intelligence 
explosion.

I have argued consistently, over the last several years, that AI 
research has boxed itself into a corner due to a philosophical 
commitment to the power of formal systems.  Since I first started 
arguing this case, Nassim Nicholas Taleb (The Black Swan) coined the 
term "Ludic Fallacy" to describe a general form of exactly the issue I 
have been describing.

I have framed this in the context of something that I called the 
"complex systems problem", the details of which are not important here, 
although the conclusion is highly relevant.

If the complex systems problem is real, then there is a very large class 
of AGI system designs that are (a) almost completely ignored at the 
moment, (b) very likely to contain truly intelligent systems, and (c) 
quite possibly implementable on relatively modest hardware.  This class 
of systems is being ignored for sociology-of-science reasons (the 
current generation of AI researchers would have to abandon their deepest 
loves to be able to embrace such systems, and since they are fallible 
humans, rather than objectively perfect scientists, this is anathema).

So, my most general answer to this question about the rate of the 
intelligence explosion is that, in fact, it depends crucially on the 
kind of AGI systems being considered.  If the scope is restricted to the 
current approaches, we might never actually reach human level 
intelligence, and the question is moot.

But if this other class of (complex) AGI systems did start being built, 
we might find that the hardware requirements were relatively modest 
(much less than supercomputer size), and the software complexity would 
also not be that great.  As far as I can see, most of the 
above-mentioned limitations would not be significant within the first 
few orders of magnitude of increase.  And, the beginning of the slope 
could be in the relatively near future, rather than decades away.

But that, as usual, is just the opinion of an AGI researcher.  No need 
to take *that* into account in assessing the factors.  ;-)



Richard Loosemore

Mathematical and Physical Sciences,
Wells College
Aurora, NY 13026
USA







