[ExI] Limiting factors of intelligence explosion speeds

BillK pharos at gmail.com
Thu Jan 20 20:25:14 UTC 2011


On Thu, Jan 20, 2011 at 7:27 PM, Richard Loosemore wrote:
<big snip>
>
> A)  Although we can try our best to understand how an intelligence
> explosion might happen, the truth is that there are too many interactions
> between the factors for any kind of reliable conclusion to be reached. This
> is a complex-system interaction in which even the tiniest, least-anticipated
> factor may turn out to be the rate-limiting step (or, conversely, the spark
> that starts the fire).
>
> C)  There is one absolute prerequisite for an intelligence explosion,
> and that is that an AGI becomes smart enough to understand its own
> design.  If it can't do that, there is no explosion, just growth as usual.
>  I do not believe it makes sense to talk about what happens *before* that
> point as part of the "intelligence explosion".
>


These two remarks strike me as very significant.

*How* does it understand its own design?

Suppose the AGI has a new design for subroutine 453741.
How does it know whether the new design is *better*?
The first step: how does the AGI make it bug-free?
Then: is it better in some circumstances but not others? Does it
reduce the benefits of other algorithms? Does it completely block
some avenues of further design?
Should this improvement be implemented before or after other pending changes?

And then the AGI is trying to do this for hundreds of thousands of routines.
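To make the problem concrete, here is a minimal sketch in Python of
what even the narrowest version of those first two questions might
look like for a single routine (all the names are hypothetical, and
this is only a toy illustration, not a claim about how a real AGI
would do it): run the old and new versions on random inputs, check
that they agree, and compare the runtimes.

import random
import time

def current_routine(xs):
    # Existing implementation: a deliberately naive selection sort.
    result = list(xs)
    for i in range(len(result)):
        j = min(range(i, len(result)), key=result.__getitem__)
        result[i], result[j] = result[j], result[i]
    return result

def candidate_routine(xs):
    # Proposed replacement: delegate to Python's built-in sort.
    return sorted(xs)

def candidate_looks_better(trials=50, size=300):
    # Crude check: identical outputs on random inputs, and a lower
    # total runtime, for the candidate routine.
    old_total = new_total = 0.0
    for _ in range(trials):
        data = [random.random() for _ in range(size)]
        t0 = time.perf_counter()
        old_out = current_routine(data)
        t1 = time.perf_counter()
        new_out = candidate_routine(data)
        t2 = time.perf_counter()
        if old_out != new_out:  # any behavioural difference fails the check
            return False
        old_total += t1 - t0
        new_total += t2 - t1
    return new_total < old_total

print(candidate_looks_better())

Even this toy harness only checks random inputs. It says nothing about
pathological cases, about interactions with other routines, about
whether the change blocks later improvements, or about the order in
which changes should be applied - and the whole exercise then has to
be repeated, and the results reconciled, across all those hundreds of
thousands of routines.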


To say that the AGI knows best and will solve all these problems is
circular logic: first build an AGI, but you need an AGI to solve
these problems in the first place.


BillK



