[ExI] Limiting factors of intelligence explosion speeds

Eugen Leitl eugen at leitl.org
Thu Jan 20 14:33:47 UTC 2011


On Thu, Jan 20, 2011 at 11:44:30AM +0000, Anders Sandberg wrote:
> One of the things that struck me during our Winter Intelligence workshop  
> on intelligence explosions was how confident some people were about the  
> speed of recursive self-improvement of AIs, brain emulation collectives  
> or economies. Some thought it was going to be fast in comparison to  
> societal adaptation and development timescales (creating a winner takes  
> all situation), some thought it would be slow enough for multiple  
> superintelligent agents to emerge. This issue is at the root of many key  

It doesn't matter if the emergence population bottleneck is narrow; it
will still radiate after spatial distribution. Unless you're looking at
a swarm mind, with individual agents capable of semi-useful but fast
local response, while decisions up the hierarchy are better informed
but also slower.

> questions about the singularity (one superintelligence or many? how much  

Many. Any nontrivially sized distributed system capable of meaningful
local response must appear as a non-singleton.

> does friendliness matter?)

Does the AI have the Buddha nature?

> It would be interesting to hear this list's take on it: what do you  
> think is the key limiting factor for how fast intelligence can amplify  
> itself?

What's the shortest possible gate delay? Add one or two zeroes; that's
the ballpark of a single iteration.
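
Back of the envelope in Python (the ~1 ps switching floor and the
fan-out factor are guesses, not measurements):

  # Ballpark time per self-improvement step, derived from gate delay.
  # Assumption: a useful step costs one or two orders of magnitude
  # more than a single gate switch.
  gate_delay_s = 1e-12
  for zeroes in (1, 2):
      print(f"+{zeroes} zeroes: {gate_delay_s * 10**zeroes:.0e} s/iteration")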

> Some factors that have been mentioned in past discussions:
>    Economic growth rate

If there's a whole planet of hardware to 0wn, you can grow at about
the speed of light, until you run out of resources and actually have
to touch the physical layer to extrude more substrate.
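
A sanity check on the "speed of light" part (rounded constants):

  # A signal crosses half the planet in ~67 ms, so propagation is not
  # what limits a network takeover -- the physical layer is.
  earth_circumference_m = 4.0075e7
  c_m_per_s = 2.998e8
  print(f"{earth_circumference_m / 2 / c_m_per_s * 1e3:.0f} ms to the antipode")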

>    Investment availability

Food availability. Joules, atoms.

>    Gathering of empirical information (experimentation, interacting with 
> an environment)

A virtual environment is pretty fast for co-evolution runs. Evaluating
easy stuff should be possible at ~ms per generation.
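
What ~ms generations buy you, if that figure holds:

  # Co-evolution throughput at ~1 ms per generation (assumed above).
  gen_time_s = 1e-3
  print(f"{86400 / gen_time_s:.1e} generations per wall-clock day")  # ~8.6e7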

>    Software complexity

There's no software, at least no more software than we carry between
our ears.

>    Hardware demands vs. available hardware

There's a whole smorgasbord of hardware to take before you
ever have to go to the kitchen.

>    Bandwidth

I have a couple of 10 GBit/s optics on my desktop. Light Peak should
do 100 GBit/s. There's fundamentally no reason not to have TBit/s
links, and hundreds or thousands of these on a local hyperlattice
loop. That is quite a lot of bandwidth.
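
The aggregate, for the hypothetical link counts above:

  # Aggregate node bandwidth for N x 1 TBit/s links (not shipping
  # hardware, just the assumption above).
  tbit_per_s = 1e12
  for links in (100, 1000):
      print(f"{links} x 1 TBit/s = {links * tbit_per_s / 8 / 1e12:.1f} TB/s")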

>    Lightspeed lags

Which is why you pack the switches as densely as possible, and
use the lowest possible complexity for each assembly.
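
Quick numbers on why density matters (assuming ~0.67 c propagation
in the interconnect):

  # Signal reach per clock tick at various clock rates.
  c = 2.998e8                  # m/s, vacuum
  v = 0.67 * c                 # assumed propagation speed in fiber/copper
  for freq_ghz in (1, 10, 100):
      print(f"{freq_ghz:>3} GHz: ~{v / (freq_ghz * 1e9) * 1e3:.0f} mm per tick")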

> Clearly many more can be suggested. But which bottlenecks are the most  
> limiting, and how can this be ascertained?

The highest complexity is evaluating nontrivial behaviour. Motorics
is easy; tasks which take people decades to master take time to check.
Multiply by a few million rounds, and that's going to take a while.
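
Roughly, assuming a 1e6 simulation speedup over real time (pure
guess):

  # Decade-scale competences, a few million evaluation rounds.
  decade_s = 10 * 365.25 * 86400
  total_s = decade_s / 1e6 * 3e6   # per-round wall clock x rounds
  print(f"~{total_s / (365.25 * 86400):.0f} years of evaluation")  # ~30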

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


