[ExI] fwd: Limiting factors of intelligence explosion speeds

spike spike66 at att.net
Thu Jan 27 16:38:23 UTC 2011

Forwarded message:

From: Omar Rahman

Original message follows:

From: Anders Sandberg <anders at aleph.se>

> One of the things that struck me during our Winter Intelligence workshop
> on intelligence explosions was how confident some people were about the
> speed of recursive self-improvement of AIs, brain emulation collectives
> or economies. Some thought it was going to be fast in comparison to
> societal adaptation and development timescales (creating a winner-takes-all
> situation), some thought it would be slow enough for multiple
> superintelligent agents to emerge. This issue is at the root of many key
> questions about the singularity (one superintelligence or many? how much
> does friendliness matter?)
>
> It would be interesting to hear this list's take on it: what do you
> think is the key limiting factor for how fast intelligence can amplify
> itself?
>
> Some factors that have been mentioned in past discussions:
>    Economic growth rate
>    Investment availability
>    Gathering of empirical information (experimentation, interacting
>    with an environment)
>    Software complexity

'Software complexity' stands out to me as the big limiting factor. Assuming
it applies at all to machine intelligences, Gödel's incompleteness theorem
would seem to imply that once this thing starts off it can't just follow
some 'ever greater intelligence' recipe indefinitely. Add to this the
cognitive load of managing a larger and larger system, and the system will
have to optimize and, oddly enough, 'automate' subprocesses; much as we
don't consciously breathe, but can control our breathing if we wish.

Once this thing hits its 'Gödel limit', if it wishes to progress further it
will be forced into the 'gathering of empirical information', and at that
stage it is unknown how long it will take for a new axiom to be discovered.
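
For reference, and as my own rough paraphrase rather than anything from
Anders: the first incompleteness theorem says that for any consistent,
effectively axiomatized theory T strong enough to express arithmetic there
is a sentence G_T with

    T \nvdash G_T   and   T \nvdash \neg G_T

so some truths about the system are neither provable nor refutable inside
it. Whether that formal ceiling translates into a practical limit on a
self-improving system is exactly what I'm asking about below.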

>    Hardware demands vs. available hardware
>    Bandwidth
>    Lightspeed lags
>
> Clearly many more can be suggested. But which bottlenecks are the most
> limiting, and how can this be ascertained?

Many people seem to assume that greater intelligence is simply a matter of
'horsepower', more processing units, or what have you. My analogy would be
a distribution network where you get more and more carrying capacity as you
add more trucks, but once you reach the ocean adding more trucks won't help.
Unless you fill the ocean with trucks, I suppose. =D
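
To put the analogy in toy form (hypothetical numbers, only to show the shape
of the curve): throughput grows with the number of trucks until some other
constraint, here the ocean crossing, becomes the binding bottleneck.

    # Toy model: effective throughput is capped by the tightest bottleneck.
    def throughput(trucks, per_truck=1.0, ocean_crossing=500.0):
        """Road capacity grows linearly with the truck count; the fixed
        ocean-crossing capacity caps the total."""
        return min(trucks * per_truck, ocean_crossing)

    for n in (10, 100, 1000, 10000):
        print(n, throughput(n))  # flat at 500.0 once n exceeds 500

Past that crossover, adding trucks (or raw processing units) buys nothing;
you have to attack the other constraint instead.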

Does anyone care to address my main assumption? Does Gödel's incompleteness
theorem apply?

Regards,

Omar Rahman