[ExI] Limiting factors of intelligence explosion speeds

Eugen Leitl eugen at leitl.org
Thu Jan 20 21:31:50 UTC 2011


On Thu, Jan 20, 2011 at 02:27:36PM -0500, Richard Loosemore wrote:

> C)  There is one absolute prerequisite for an intelligence explosion,
> and that is that an AGI becomes smart enough to understand its own
> design.  If it can't do that, there is no explosion, just growth as  

Unnecessary for Darwinian systems. The process is dumb as dirt,
but it works quite well.

> usual.  I do not believe it makes sense to talk about what happens  

If you define the fitness function and have ~ms generation
turnaround, it's not quite business as usual anymore.
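
A rough sketch of the kind of dumb-but-effective loop I mean, in Python;
fitness(), mutate() and the seed are whatever the problem owner supplies,
and every name here is made up:

    import random

    def evolve(fitness, mutate, seed, population=64, generations=10_000):
        """Blind Darwinian optimization: no self-understanding anywhere.

        fitness(candidate) -> float and mutate(candidate) -> candidate
        are supplied by whoever defines the problem; the loop is dumb.
        """
        pool = [mutate(seed) for _ in range(population)]
        for _ in range(generations):
            # one generation; with cheap evaluations this can run in ~ms
            scored = sorted(pool, key=fitness, reverse=True)
            parents = scored[: population // 4]   # truncation selection
            pool = [mutate(random.choice(parents)) for _ in range(population)]
        return max(pool, key=fitness)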

> *before* that point as part of the "intelligence explosion".
>
> D)  When such a self-understanding system is built, it is unlikely that

I don't think a self-understanding system is possible at all.
Or, rather, that it would perform any better than a blind optimization.

> it will be the creation of a lone inventor who does it in their shed at
> the bottom of the garden, without telling anyone.  Very few of the "lone
> inventor" scenarios (the Bruce Wayne scenarios) are plausible.

I agree it's probably a large scale effort, initially.

> E)  Most importantly, the invention of a human-level, self-understanding

I wonder where the self-understanding meme is coming from. It's 
certainly pervasive enough.

> AGI would not lead to a *subsequent* period (we can call it the
> "explosion period") in which the invention just sits on a shelf with
> nobody bothering to pick it up.  A situation in which it is just one
> quiet invention alongside thousands of others, unrecognized and not
> generally believed.
>
> F)  When the first human-level AGI is developed, it will either require
> a supercomputer-level of hardware resources, or it will be achievable

Bootstrapping takes many orders of magnitude more resources than are
required for operation, even before any optimization happens.

> with much less.  This is significant, because world-class supercomputer
> hardware is not something that can quickly be duplicated on a large
> scale.  We could make perhaps hundreds of such machines, with a massive

About 30 years from now, TBit/s photonic networking will be the norm.
The separation between core and edge will be gone and, inshallah, so
will policy enforcement. Every city block will be a supercomputer by then.

> effort, but probably not a million of them in a couple of years.

There are a lot of very large datacenters with excellent network
cross-section, even if you disregard the large-screen TVs and game
consoles on >GBit/s residential networks.

> G)  There are two types of intelligence speedup:  one due to faster
> operation of an intelligent system (clock speed) and one due to

Clocks don't scale; eventually you'll settle for locally asynchronous
operation, with loosely coupled oscillators synchronizing at large scale.
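
A standard toy model for that kind of loose coupling is the Kuramoto
model; the sketch below (assumed parameters, nothing hardware-specific)
just shows phases locking without any central clock:

    import numpy as np

    def kuramoto_step(phases, natural_freq, coupling, dt=1e-3):
        """Advance N loosely coupled oscillators by one time step.

        Each oscillator drifts at its own natural frequency but is nudged
        toward the others' phases; synchrony emerges without a global clock.
        """
        n = len(phases)
        # element (i, j) is sin(theta_j - theta_i); summing over j couples i to the rest
        interaction = np.sin(phases[None, :] - phases[:, None]).sum(axis=1)
        return phases + dt * (natural_freq + (coupling / n) * interaction)

    phases = np.random.uniform(0, 2 * np.pi, 100)
    freqs = np.random.normal(1.0, 0.05, 100)
    for _ in range(50_000):
        phases = kuramoto_step(phases, freqs, coupling=2.0)
    # order parameter near 1.0 means the ensemble has locked together
    print(abs(np.exp(1j * phases).mean()))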

> improvement in the type of mechanisms that implement the thought  
> processes.  Obviously both could occur at once, but the latter is far

How much random biochemistry tweaking would dramatically improve
on current CNS performance? As a good guess: none. So once you've
reimplemented the near-optimal substrate, dramatic improvements
are over. This isn't software; this is direct implementation of the
neural computational substrate in as thin a hardware layer as this
universe allows.

> more difficult to achieve, and may be subject to fundamental limits that
> we do not understand.  Speeding up the hardware, on the other hand, has

I disagree; the limits are those of computational physics, and these are
fundamentally simple.

> been going on for a long time and is more mundane and reliable.  Notice
> that both routes lead to greater "intelligence", because even a human
> level of thinking and creativity would be more effective if it were
> happening (say) a thousand times faster than it does now.

Run a dog for a gigayear, still no general relativity.

>
> *********************************************
>
> Now the specific factors you list.
>
> 1) Economic growth rate
>
> One consequence of the above reasoning is that economic growth rate
> would be irrelevant.  If an AGI were that smart, it would already be

Any technology allowing you to keep a mind in a box will allow you
to make a pretty good general assembler. The limits of such technology
are energy and matter fluxes. Buying and shipping widgets is only a
constraining factor in the physical-layer bootstrap (if it is necessary
at all; 30 years hence, all-purpose fabrication will have a pretty small
footprint).

> obvious to many that this was a critically important technology, and no
> effort would be spared to improve the AGI "before the other side does".  
> Entire national economies would be sublimated to the goal of developing  
> the first superintelligent machine.

This would be fun to watch.

> In fact, economic growth rate would be *defined* by the intelligence
> explosion projects taking place around the world.
>
>
> 2) Investment availability
>
> The above reasoning also applies to this case.  Investment would be  
> irrelevant because the players would either be governments or frenzied
> bubble-investors, and they would be pumping it in as fast as money could  
> be printed.
>
>
> 3) Gathering of empirical information (experimentation, interacting with
> an environment).
>
> So, this is about the fact that the AGI would need to do some  
> experimentation and interaction with the environment.  For example, if  

If you have enough crunch to run a mind, you have enough crunch to
run really really really good really fast models of the universe.

> it wanted to reimplement itself on faster hardware (the quickest route  
> to an intelligence increase) it would probably have to set up its own  
> hardware research laboratory and gather new scientific data by doing  
> experiments, some of which would go at their own speed.

You're thinking like a human.

> The question is:  how much of the research can be sped up by throwing
> large amounts of intelligence at it?  This is the parallel-vs-serial
> problem (i.e. you can't make a baby nine times quicker by asking nine  
> women to be pregnant for one month).

It's a good question. I have a hunch (no proof, nothing) that the
current way of doing reality modelling is extremely inefficient.
Currently, experimenters have every reason to sneer at modelers.
Currently.
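
One crude way to put numbers on the parallel-vs-serial question is
Amdahl's law: if a fraction s of the research pipeline is irreducibly
serial (the pregnancy), no amount of extra intelligence thrown at the
rest gets you past 1/s overall. The fractions below are invented:

    def amdahl_speedup(serial_fraction, workers):
        """Overall speedup when only the non-serial part parallelizes."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

    # hypothetical: 5% of the work is a physically slow experiment
    for n in (10, 100, 1_000, 1_000_000):
        print(n, round(amdahl_speedup(0.05, n), 1))
    # saturates near 1 / 0.05 = 20x no matter how much crunch you add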

> This is not a factor that I believe we can understand very well ahead of
> time, because some experiments that look as though they require
> fundamentally slow physical processes -- like waiting for a silicon
> crystal to grow, so we can study a chip fabrication mechanism -- may
> actually be dependent on smartness, in ways that we cannot anticipate.
> It could be that instead of waiting for the chips to grow at their own
> speed, the AGI can do clever micro-experiments that give the same
> information faster.

Any intelligence worth its salt would see that it should use computational
chemistry to bootstrap molecular manufacturing. The fruit could be hanging
pretty low there.

> This factor invites unbridled speculation and opinion, to such an extent
> that there are more opinions than facts.  However, we can make one
> observation that cuts through the arguments.  Of all the factors that  
> determine how fast empirical scientific research can be carried out, we  
> know that intelligence and thinking speed of the scientist themselves  
> *must* be one of the most important, today.  It seems likely that in our  
> present state of technological sophistication, advanced research  
> projects are limited by the availability and cost of intelligent and  
> experienced scientists.

You can also vastly speed up the rate of prototyping by scaling down
and by proper tooling. You can see the first hints of that in lab
automation, particularly microfluidics. Add the ability to fork off
dedicated investigators at the drop of a hat, and things start happening,
in a positive-feedback loop.
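
A toy version of that loop, under my own (unproven) assumption that
each step's improvement is proportional to the capability already
available; the gain constant is invented:

    def feedback_loop(capability=1.0, gain=0.005, steps=1_000):
        """Toy positive-feedback loop: better tooling -> more/faster forked
        investigators -> better tooling. Purely illustrative numbers."""
        history = [capability]
        for _ in range(steps):
            capability += gain * capability  # improvement scales with capability
            history.append(capability)
        return history

    # with these made-up numbers, capability compounds to roughly 150x
    print(round(feedback_loop()[-1], 1))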

> But if research labs around the world have stopped throwing *more*
> scientists at problems they want to solve, because the latter cannot be
> had, or are too expensive, would it be likely that the same research
> labs are *also*, quite independently, at the limit for the physical rate
> at which experiments can be carried out?  It seems very unlikely that  
> both of these limits have been reached at the same time, because they  
> cannot be independently maximized.  (This is consistent with anecdotal  
> reports:  companies complain that research staff cost a lot, and that  
> scientists are in short supply:  they don't complain that nature is just  
> too slow).

Most monkeys rarely complain that they're monkeys. (Resident monkeys
excluded, of course).

> In that case, we should expect that any experiment-speed limits lie up
> the road, out of sight.  We have not reached them yet.

I, a mere monkey, can easily imagine two orders of magnitude of speed
improvement, which, of course, feeds a positive feedback loop.

> So, for that reason, we cannot speculate about exactly where those
> limits are.  (And, to reiterate:  we are talking about the limits that
> hit us when we can no longer do an end-run around slow experiments by

I do not think you will need slow experiments. Not slow by our standards,
at least.

> using our wits to invent different, quicker experiments that give the
> same information).
>
> Overall, I think that we do not have concrete reasons to believe that
> this will be a fundamental limit that stops the intelligence explosion
> from taking an AGI from H to (say) 1,000 H.  Increases in speed within
> that range (for computer hardware, for example) are already expected,
> even without large numbers of AGI systems helping out, so it would seem  
> to me that physical limits, by themselves, would not stop an explosion  
> that went from I = H to I = 1,000 H.

Speed limits (assuming classical computation) do not begin to take hold
before a 10^6 speedup, and maybe not before 10^9 (that one is harder to
judge; I do not have a good model of wetware running at a 10^9 speedup
relative to the current wallclock).
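
Back-of-envelope for where those exponents come from (my arithmetic,
using the ms-vs-ns/ps switching times I mention further down):

    neural_event = 1e-3         # s, roughly ms-scale spiking/synaptic events
    switch_conservative = 1e-9  # s, ns-scale solid-state switching
    switch_aggressive = 1e-12   # s, ps-scale solid-state switching

    print(f"{neural_event / switch_conservative:.0e}")  # ~1e+06 speedup
    print(f"{neural_event / switch_aggressive:.0e}")    # ~1e+09 speedup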

>
> 4)  Software complexity
>
> By this I assume you mean the complexity of the software that an AGI
> must develop in order to explode its intelligence.  The premise is
> that even an AGI with self-knowledge finds it hard to cope with the
> fabulous complexity of the problem of improving its own software.

Software, that's pretty steampunk of you.

> This seems implausible as a limiting factor, because the AGI could
> always leave the software alone and develop faster hardware.  So long as

There is no difference between hardware and software (state) as far
as advanced cognition is concerned. Once you've covered the easy
gains in the first giant co-evolution steps, further increases are much
more modest, and much more expensive.

> the AGI can find a substrate that gives it (say) 1,000 H thinking-speed,

We should be able to do 10^3 with current technology.

> we have the possibility for a significant intelligence explosion.

Yeah, verily.

> Arguing that software complexity will stop the initial human level AGI  

If it hurts, stop doing it.

> from being built is a different matter.  It may stop an intelligence  
> explosion from happening by stopping the precursor events, but I take  
> that to be a different type of question.
>
>
> 5)  Hardware demands vs. available hardware
>
> I have already mentioned, above, that a lot depends on whether the first  
> AGI requires a large (world-class) supercomputer, or whether it can be  
> done on something much smaller.

Current supercomputers are basically consumer devices or embedded systems
on steroids, networked on a large scale.

> This may limit the initial speed of the explosion, because one of the  
> critical factors would be the sheer number of copies of the AGI that can  

Unless the next 30 years fail to see the same development as the last
ones, substrate is the least of your worries.

> be created.  Why is this a critical factor?  Because the ability to copy  
> the intelligence of a fully developed, experienced AGI is one of the big  
> new factors that makes the intelligence explosion what it is:  you  
> cannot do this for humans, so human geniuses have to be rebuilt from  
> scratch every generation.
>
> So, the initial requirement that an AGI be a supercomputer would make it  
> hard to replicate the AGI on a huge scale, because the replication rate  
> would (mostly) determine the intelligence-production rate.

Nope.

> However, as time went on, the rate of replication would grow, as  

Look, even now we know what we would need, but you can't buy it. You
can design it, though, and two weeks from now you'll get your first
prototypes. That's today; 30 years from now the prototypes might be
hours away.

And do you need prototypes to produce a minor variation on a stock
design? Probably not.

> hardware costs went down at their usual rate.  This would mean that the  
> *rate* of arrival of high-grade intelligence would increase in the years  
> following the start of this process.  That intelligence would then be  
> used to improve the design of the AGIs (at the very least, increasing  
> the rate of new-and-faster-hardware production), which would have a  
> positive feedback effect on the intelligence production rate.
>
> So I would see a large-hardware requirement for the first AGI as  
> something that would dampen the initial stages of the explosion.  But  

Au contraire, this planet is made of Swiss cheese. Annex at your leisure.

> the positive feedback after that would eventually lead to an explosion  
> anyway.
>
> If, on the other hand, the initial hardware requirements are modest (as  
> they very well could be), the explosion would come out of the gate at  
> full speed.
>
>
>
>
> 6)  Bandwidth
>
> Alongside the aforementioned replication of adult AGIs, which would  
> allow the multiplication of knowledge in ways not currently available in  
> humans, there is also the fact that AGIs could communicate with one  
> another using high-bandwidth channels.  This is inter-AGI bandwidth.

Fiber is cheap. Current fiber comes in 40 or 100 GBit/s parcels;
30 years hence, bandwidth will probably be adequate.

>
> As a separate issue, there might be bandwidth limits inside an AGI,  
> which might make it difficult to augment the intelligence of a single  
> system.  This is intra-AGI bandwidth.

Even now, bandwidth growth is far in excess of computation growth.
Once you go to embedded memory, you're more closely matched. But the
volume/surface ratio (you only have to communicate surface state)
still indicates that local communication is the bottleneck.
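
A quick sketch of that volume/surface argument, in my framing: compute
scales with module volume, the state that has to cross the boundary
scales with surface, so external bandwidth per unit of compute falls
as modules grow (the numbers are just the cube geometry):

    def boundary_per_compute(r):
        """For a cube of side r: compute ~ r**3, boundary state ~ 6 * r**2,
        so the ratio shrinks as 1/r."""
        return (6 * r**2) / r**3

    for r in (1, 10, 100, 1_000):
        print(r, boundary_per_compute(r))   # 6.0, 0.6, 0.06, 0.006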

> The first one - inter-AGI bandwidth - is probably less of an issue for  
> the intelligence explosion, because there are so many research issues  
> that can be split into separately addressable components, that I doubt we  
> would find AGIs sitting around with no work to do on the intelligence  
> amplification project, on account of waiting for other AGIs to get a  
> free channel to talk to them.

You're making it sound so planned and orderly.

> Intra-AGI bandwidth is another matter entirely.  There could be  
> limitations on the IQ of an AGI -- for example if working memory  
> limitations (the magic number seven, plus or minus two) turned out to be  
> caused by connectivity/bandwidth limits within the system.

So many assumptions.

> However, notice that such factors may not inhibit the initial phase of  
> an explosion, because the clock speed, not IQ, of the AGI may be  

There is no clock, literally. Operations per unit volume, certainly.

> improvable by several orders of magnitude before bandwidth limits kick  
> in.  The reasoning behind this is the observation that neural signal  

Volume/surface ratio is on your side here.

> speed is so slow.  If a brain-like system (not necessarily a whole brain  
> emulation, but just something that replicated the high-level  
> functionality) could be built using components that kept the same type  
> of processing demands and the same signal speed, then in that kind of  
> system there would be plenty of room to develop faster signal  
> speeds and increase the intelligence of the system.
>
> Overall, this is, I believe, the factor that is most likely to cause  
> trouble.  However, much research is needed before much can be said with  
> certainty.
>
> Most importantly, this depends on *exactly* what type of AGI is being  
> built.  Making naive assumptions about the design can lead to false  
> conclusions.

Just think of it as a realtime simulation of a given 3d physical
process (higher dimensions are mapped to 3d, so they don't figure).
Suddenly things are simple.

>
>
> 7)  Lightspeed lags
>
> This is not much different than bandwidth limits, in terms of the effect  
> it has.  It would be a significant problem if the components of the  
> machine were physically so far apart that massive amounts of data (by  
> assumption) were delivered with a significant delay.

Vacuum or glass is a FIFO, and you don't have to wait for ACKs.
Just fire stuff bidirectionally, and deal with transmission errors
by graceful degradation.
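
Rough numbers for why you fire and forget instead of waiting for ACKs
(link rate, packet size and path delay below are all assumptions):

    link_rate = 100e9   # bit/s, an assumed 100 GBit/s link
    packet = 1e6        # bits per packet, assumed
    one_way = 0.5       # s of light-speed delay over a long vacuum path

    # stop-and-wait: one packet per round trip, throughput is latency-bound
    print(packet / (2 * one_way))   # 1e6 bit/s
    # streaming into the FIFO with forward error correction: link-bound,
    # the path delay only adds a constant latency
    print(link_rate)                # 1e11 bit/s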

> By itself, again, this seems unlikely to be a problem in the initial few  
> orders of magnitude of the explosion.  Again, the argument derives from  
> what we know about the brain.  We know that the brain's hardware was  
> chosen due to biochemical constraints.  We are carbon-based, not  
> silicon-and-copper-based, so, no chips in the head, only pipes filled  
> with fluid and slow molecular gates in the walls of the pipes.  But if  
> nature used the pipes-and-ion-channels approach, there seems to be  
> plenty of scope for speedup with a transition to silicon and copper (and  
> never mind all the other more exotic computing substrates on the  
> horizon).  If that transition produced a 1,000x speedup, this would be  
> an explosion worthy of the name.

Why so modest?

> The only reason this might not happen would be if, for some reason, the  
> brain is limited on two fronts simultaneously:  both by the carbon  
> implementation and by the fact that bigger brains cause disruptive  

The brain is a slow, noisy (though one that uses noise to its own
advantage), metabolically constrained system which burns most of its
metabolism on homeostasis. It doesn't take a genius to sketch the
obvious ways in which you could reimplement that design, keeping the
advantages and removing the disadvantages.

> light-speed delays.  Or, that all non-carbon-implementation of the brain  
> take us up close to the lightspeed limit before we get much of a speedup  

We work with ~120 m/s here, not ~120 Mm/s. Reduce the feature size by
an order of magnitude or two, get switching times of ns or ps instead
of ms, and c is not that big a limitation anymore.
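
The arithmetic behind that, using the ~120 m/s figure above plus an
assumed ~10 cm cross-brain path and light in glass at ~2e8 m/s:

    axon_speed = 120.0       # m/s, fast myelinated fibre
    light_in_glass = 2.0e8   # m/s, roughly c / 1.5

    brain_path = 0.1         # m, assumed cross-brain distance
    device_path = 0.01       # m, assumed cross-device distance after shrinking

    print(brain_path / axon_speed)        # ~8e-4 s, i.e. ~0.8 ms across the brain
    print(device_path / light_in_glass)   # 5e-11 s, i.e. 50 ps across the device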

> over the brain.  Neither of these ideas seem plausible.  In fact, they  
> both seem to me to require a coincidence of limiting factors (two  
> limiting factors just happening to kick in at exactly the same level),  
> which I find deeply implausible.
>
>
> *****************
>
> Finally, some comments about approaches to AGI that would affect the  
> answer to this question about the limiting factors for an intelligence  
> explosion.
>
> I have argued consistently, over the last several years, that AI  
> research has boxed itself into a corner due to a philosophical  
> commitment to the power of formal systems.  Since I first started  

Very much so.

> arguing this case, Nassim Nicholas Taleb (The Black Swan) coined the  
> term "Ludic Fallacy" to describe a general form of exactly the issue I  
> have been describing.
>
> I have framed this in the context of something that I called the  
> "complex systems problem", the details of which are not important here,  
> although the conclusion is highly relevant.
>
> If the complex systems problem is real, then there is a very large class  
> of AGI system designs that are (a) almost completely ignored at the  
> moment, and (b) very likely to contain true intelligent systems, and (c)  
> quite possibly implementable on relatively modest hardware.  This class  

Define "relatively modest".

> of systems is being ignored for sociology-of-science reasons (the  
> current generation of AI researchers would have to abandon their deepest  
> loves to be able to embrace such systems, and since they are fallible  
> humans, rather than objectively perfect scientists, this is anathema).

Which is why blind optimization processes running on acres of
hardware will kick their furry little butts.

> So, my most general answer to this question about the rate of the  
> intelligence explosion is that, in fact, it depends crucially on the  
> kind of AGI systems being considered.  If the scope is restricted to the  
> current approaches, we might never actually reach human level  
> intelligence, and the question is moot.
>
> But if this other class of (complex) AGI systems did start being built,  
> we might find that the hardware requirements were relatively modest  
> (much less than supercomputer size), and the software complexity would  
> also not be that great.  As far as I can see, most of the  

I love this "software" thing.

> above-mentioned limitations would not be significant within the first  
> few orders of magnitude of increase.  And, the beginning of the slope  
> could be in the relatively near future, rather than decades away.

In order to have progress, you first have to have people working on it.

> But that, as usual, is just the opinion of an AGI researcher.  No need  
> to take *that* into account in assessing the factors.  ;-)

Speaking of AGI researchers: do you have a nice publication track record
of yours you could dump here?

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


