[ExI] Limiting factors of intelligence explosion speeds

Richard Loosemore rpwl at lightlink.com
Thu Jan 20 21:45:57 UTC 2011


BillK wrote:
> On Thu, Jan 20, 2011 at 7:27 PM, Richard Loosemore  wrote:
> <big snip>
>> A)  Although we can try our best to understand how an intelligence
>> explosion might happen, the truth is that there are too many interactions
>> between the factors for any kind of reliable conclusion to be reached. This
>> is a complex-system interaction in which even the tiniest, least-anticipated
>> factor may turn out to be the rate-limiting step (or, conversely, the spark
>> that starts the fire).
>>
>> C)  There is one absolute prerequisite for an intelligence explosion,
>> and that is that an AGI becomes smart enough to understand its own
>> design.  If it can't do that, there is no explosion, just growth as usual.
>>  I do not believe it makes sense to talk about what happens *before* that
>> point as part of the "intelligence explosion".
>>
> 
> 
> These two remarks strike me as being very significant.
> 
> *How* does it understand its own design?
> 
> Suppose the AGI has a new design for subroutine 453741.
> How does it know whether it is *better*?
> First step is: How does the AGI make it bug-free?
> Then - Is it better in some circumstances and not others? Does it
> lessen the benefits of some other algorithms?  Does it completely
> block some avenues of further design?
> Should this improvement be implemented before or after other changes?

Your question is about how to boost the system's functionality (its IQ), 
rather than just its clock speed.

So, first response:  it can get more bang for the buck by simply 
becoming an expert in how to design better electronic circuits.

(In that respect, my point C above was badly worded:  the AGI needs to 
be at least human-level intelligent, so that it can build a faster 
computing substrate for its own mind.  At a minimum it should also 
understand *enough* about its own design to make sensible choices about 
which low-level electronics would be worth speeding up with better 
hardware.)

But your point is about how it would augment its own intelligence 
mechanisms, not just its hardware.

The nature of your question -- about improving a particular algorithm -- 
presupposes a certain kind of AGI design in the first place:  one in 
which a lot hinges on the design of one particular algorithm.

In my approach to AGI, by contrast, there is a swarm of algorithms, none 
of which is critical to performance (the system is designed to degrade 
gracefully), so improvements are made gradually, as the result of 
empirical investigation.  I would not expect any of the questions on 
your list to be a problem, either for human engineers or for the AGI 
itself.
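
To make that concrete, here is a minimal, hypothetical sketch -- the 
names, the scoring scheme, and the adoption rule are all invented for 
illustration, not taken from my actual architecture -- of how a 
swarm-style system might trial a candidate algorithm empirically, 
rather than try to prove it correct in advance:

    import random

    # Hypothetical "swarm" of interchangeable heuristics.  No single
    # member is critical: if one fails, the ensemble degrades
    # gracefully instead of crashing.

    def heuristic_a(x):
        return x * 0.9

    def heuristic_b(x):
        return x + 0.1

    SWARM = [heuristic_a, heuristic_b]

    def ensemble(x):
        """Average the answers of whichever heuristics still work."""
        results = []
        for h in SWARM:
            try:
                results.append(h(x))
            except Exception:
                pass    # one broken member degrades the swarm, not destroys it
        return sum(results) / len(results) if results else 0.0

    def trial_candidate(candidate, score, trials=1000):
        """Empirically compare the ensemble with and without a candidate.

        'score' rates an answer (higher is better); the candidate is
        kept only if the measured total improves, and rolled back
        otherwise.
        """
        baseline = sum(score(ensemble(random.random()))
                       for _ in range(trials))
        SWARM.append(candidate)
        with_it = sum(score(ensemble(random.random()))
                      for _ in range(trials))
        if with_it <= baseline:
            SWARM.remove(candidate)    # keep the change only if it helps
        return with_it > baseline

    # Example trial: score answers by closeness to a known target.
    score = lambda y: -abs(y - 0.5)
    print(trial_candidate(lambda x: x / 2.0, score))

The design choice doing the work here is redundancy: because no single 
member of the swarm is load-bearing, a candidate can be tried, measured, 
and rolled back without any of the questions on your list ever becoming 
a blocking step.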



> And then the AGI is trying to do this for hundreds of thousands of routines.
> 
> 
> To say that the AGI knows best and will solve all these problems is
> circular logic.
> First make an AGI, but you need an AGI to solve these problems........

I see nothing circular.  You may have to explain.

At least, I can see an entire class of systems in which it is very much 
NOT circular.



Richard Loosemore



