[ExI] Chain of causes

Eugen Leitl eugen at leitl.org
Fri Mar 18 13:19:09 UTC 2011

On Fri, Mar 18, 2011 at 09:03:35AM -0400, Richard Loosemore wrote:

> If this is an accurate picture of where we are, then it could be that  
> much of the work going on in AGI will be wasted, because it is deeply  
> entwined with the understandability of AGI mechanisms.  People design  

You can say that again. "A heap of straw" would come to mind (yes,
I know he didn't mean that as a retraction).

> such mechanisms because the mechanisms look like they will do the  
> required job, but it could be that the only mechanisms that really work  
> are the ones that do *not* look like they will do the job.

Decidability and computability are not just words. Any useful 
system must have autonomy (micromanagement is not an option,
for both scale and comprehensibility reasons), and an autonomous
system is, by definition, out of our control.

It's not a bug, it's a feature. Learn to love your inner
chaotic neutral, and stop worrying.
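The "decidability is not just a word" point is the classic halting-problem diagonalization: no general procedure can predict the behavior of arbitrary programs, which is one reason a fully autonomous system cannot be micromanaged. A minimal sketch, with illustrative names (`halts` and `diag` are hypothetical, not real library functions):

```python
def halts(f, x):
    """Hypothetical oracle deciding whether f(x) halts.

    Turing's argument shows no such total function can exist,
    so this stub only raises.
    """
    raise NotImplementedError("no general halting decider exists")

def diag(f):
    """Diagonal program: halts exactly when halts() says f(f) loops.

    If halts() existed, diag(diag) would halt iff it does not halt,
    a contradiction -- hence halts() cannot be implemented.
    """
    if halts(f, f):
        while True:
            pass  # loop forever when f(f) is predicted to halt
    return None
```

The contradiction lives entirely in the comments: any candidate implementation of `halts` is defeated by feeding `diag` to itself, which is the formal core of the claim that a sufficiently general system's behavior cannot be decided in advance.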

> This is just the reverse of the "they will do as they damn well please"  
> problem, which happens when we design it as if it ought to work as we  

Design is sterile, since it is out of reach. The only thing in reach
is designing boundary conditions for emergence; trying to engineer its
further evolution is a fool's game.

> desire it, but then it does as it pleases.  The reverse is that the only  
> design that actually does do what we want it to do, has a mechanism that  
> does not look as though it should really work.

Did I get that right? Did you just color large swathes of the approach
space sterile?

Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE