[ExI] Limiting factors of intelligence explosion speeds
Eugen Leitl
eugen at leitl.org
Thu Jan 20 21:41:13 UTC 2011
On Thu, Jan 20, 2011 at 08:25:14PM +0000, BillK wrote:
> These two remarks strike me as being very significant.
>
> *How* does it understand its own design?
>
> Suppose the AGI has a new design for subroutine 453741.
Suppose you have a new design for a cortical column.
> How does it know whether it is *better*?
Probably by building a critter that incorporates it and testing
its performance. Wait, how is this different from what you're
doing right now? Sure, your generations are decades, not
milliseconds, but it's all the same thing.
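(A minimal sketch of that build-and-test step, in Python. The
design-as-parameter-vector and speed-as-fitness are toy
assumptions of mine, not anything from this thread:)

    import random

    def mutate(design):
        # A "new design for a cortical column": perturb one
        # parameter of the parent design at random.
        child = list(design)
        i = random.randrange(len(child))
        child[i] += random.gauss(0.0, 0.1)
        return child

    def fitness(design):
        # "Building a critter that incorporates it and testing
        # its performance": the toy benchmark is running speed,
        # here just the sum of the parameters.
        return sum(design)

You never ask in the abstract whether the new column is better;
you build the animal and time it.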
> First step is: How does the AGI make it bug-free?
You don't make it bug-free up front; selection filters the broken
variants out. The cats get the slow mice. The fast mice escape
the slow cats.
> Then - Is it better in some circumstances and not others? Does it
Ah, but if you encounter the wrong circumstances, you will
be outperformed.
> lessen the benefits of some other algorithms? Does it completely
> block some avenues of further design?
The slow mice still get born. They don't get very old, though.
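(The same point as a sketch; truncation selection is my toy
choice here, not a claim about how predation actually works:)

    def select(population, survivors):
        # Every mouse gets born, slow or fast. The cats then get
        # the slow ones: only the fastest reach breeding age.
        # Toy fitness again: speed = sum of design parameters.
        return sorted(population, key=sum, reverse=True)[:survivors]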
> Should this improvement be implemented before or after other changes?
You're thinking like a rational human. The process is not rational,
nor is it human. It is not even sentient.
> And then the AGI is trying to do this for hundreds of thousands of routines.
Millions of generations. Population sizes of giga to tera.
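(The whole process, as a minimal sketch under the same toy
assumptions; generation count and population size are scaled way
down from millions and giga/tera so it actually finishes:)

    import random

    def mutate(design):
        child = list(design)
        i = random.randrange(len(child))
        child[i] += random.gauss(0.0, 0.1)
        return child

    population = [[0.0] * 10 for _ in range(1000)]   # tera -> kilo
    for generation in range(1000):                   # millions -> 10^3
        # Slow mice get born too...
        offspring = [mutate(random.choice(population))
                     for _ in range(1000)]
        # ...but they don't get very old: keep the 1000 fastest.
        population = sorted(population + offspring,
                            key=sum, reverse=True)[:1000]

No bug audit, no design review, nothing sentient anywhere in the
loop; differential survival does all the work.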
>
> To say that the AGI knows best and will solve all these problems is
> circular logic.
> First make an AGI, but you need an AGI to solve these problems...
Backtrace the chain of events starting with you reading this message.
Way back. All the way back.
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE