[ExI] Limiting factors of intelligence explosion speeds

Richard Loosemore rpwl at lightlink.com
Fri Jan 21 14:30:39 UTC 2011


Stefano Vaj wrote:
> On 20 January 2011 20:27, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Anders Sandberg wrote:
>> E)  Most importantly, the invention of a human-level, self-understanding
>> AGI would not lead to a *subsequent* period (we can call it the
>> "explosion period") in which the invention just sits on a shelf with
>> nobody bothering to pick it up.
> 
> Mmhhh. Aren't we already there? A few basic questions:
> 
> 1) Computers are vastly inferior to humans in some specific tasks, yet
> vastly superior in others. Why would human-like features be so much
> more crucial in defining a computer's "intelligence" than, say, faster
> integer factorisation?

Well, remember that the hypothesis under consideration here is a system 
that is capable of redesigning itself.

"Human-level" does not mean identical to a human in every respect, it 
means smart enough to understand everything that we understand. 
Something with general enough capabilities that it could take a course 
in AGI then converse meaningfully with its designers about all the 
facets of its own design.  And, having done that, it would then be 
capable of working on an improvement of its own design.

So, to answer your question, faster integer factorization would not be 
enough to allow it to do that self-redesign.




> 2) If the Principle of Computational Equivalence is true, what are we
> all, really, if not "computers" optimised for, and of course
> executing, different programs? Is AGI ultimately anything other than a
> very complex (and, on contemporary silicon processors, much slower and
> very inefficient) emulation of typical carbon-based units' data
> processing?

The main point of building an AGI would be to build it in such a way 
that we understood how it worked, and could therefore (almost 
certainly) think of ways to improve it.

Also, if we had a working AGI we could do something that we cannot do 
with human brains:  we could inspect and learn about any aspect of its 
function in real time.

These two factors - the understanding and the ability to monitor - would 
put us in a radically different situation from the one we are in now.
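
As a toy illustration of that second factor (my own sketch, nothing 
like this appears in the post): any software system can be instrumented 
so that its internal events are observable as they happen, which is 
exactly what we cannot do with a living brain.

    import functools, time

    def monitored(fn):
        # Wrap a function so every call is logged in real time:
        # arguments in, result out, and how long it took.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            print(f"{fn.__name__}{args} -> {result!r} ({elapsed:.6f}s)")
            return result
        return wrapper

    @monitored
    def assess_design(component):
        # Hypothetical stand-in for some internal cognitive operation.
        return len(component) % 3

    assess_design("memory subsystem")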

There are other factors that would add to these.  One concerns the 
AGI's ability to duplicate itself after acquiring some knowledge.  In 
the case of a human, a single world-leading expert in some field is 
nothing more than one expert.  But if an AGI became a world expert, she 
could then duplicate herself a thousand times over and work with her 
sisters as a team (assuming that the problem under attack would benefit 
from a big team).
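
To put a number on that parenthetical caveat, here is a back-of-envelope 
sketch (my illustration, not from the original post) using Amdahl's law: 
the thousand-fold duplication only pays off to the extent that the 
problem parallelises.

    def amdahl_speedup(serial_fraction, n_workers):
        # Amdahl's law: the overall speedup from n parallel workers
        # when some fraction of the work is irreducibly serial.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

    # A thousand copies of the expert on a task that is 5% serial:
    print(amdahl_speedup(0.05, 1000))   # ~19.6x, nowhere near 1000x
    # The same thousand copies on a perfectly parallel task:
    print(amdahl_speedup(0.0, 1000))    # exactly 1000x

So the duplication argument is strongest for problems that decompose 
cleanly into independent subtasks.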

Lastly, there is the fact that an AGI could communicate with its sisters 
on high-bandwidth channels, as I mentioned in my essay.  We cannot do 
that.  It would make a difference.
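
Rough arithmetic makes that difference vivid (illustrative figures of 
my own, not from the essay): human speech carries on the order of tens 
of bits per second, while even a commodity network link carries 
billions.

    def transfer_days(bits, bits_per_second):
        # How long it takes to move a body of knowledge at a given rate.
        return bits / bits_per_second / 86400.0

    knowledge = 1e9      # one gigabit of expertise -- an arbitrary stand-in
    speech    = 40.0     # rough information rate of human speech, bits/s
    network   = 10e9     # a 10 Gb/s link between AGI copies

    print(transfer_days(knowledge, speech))    # ~289 days of nonstop talking
    print(transfer_days(knowledge, network))   # ~1e-6 days (about 0.1 s)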

> 3) What is the actual level of self-understanding of the average
> biological, or even human, brain? What would "self-understanding" mean
> for a computer? Anything radically different from a workstation
> utilised to design the next Intel processor? And if anything more is
> required, what difference would it make simply to put a few neurons in
> a PC? A whole human brain? A man (fyborg-style) at the keyboard? This
> would not really slow things down one bit, because as soon as
> something becomes executable in a faster fashion on the rest of the
> "hardware", you simply move the relevant processes from one piece of
> hardware to another, as you do today with CPUs and GPUs. In the
> meantime, everybody does what he does best, and already exhibits, at
> an increasing performance level, whatever "AGI" feature one may think
> of...


I think that my answer above addresses this point too.

A workstation that is used to design the next Intel processor has zero 
self-understanding, because it cannot autonomously start and complete a 
project to redesign itself.

It would just be a tool added on to a human.

Overall, a planet with one million original, creative human scientists 
on it is just that.

But a planet with those same scientists, plus a viable AGI, can become, 
almost overnight, a planet with a few billion more creative scientists. 
That is not just business as usual, I think.



Richard Loosemore





