[ExI] Yes, the Singularity is the greatest threat to humanity

Eugen Leitl eugen at leitl.org
Mon Jan 17 07:46:55 UTC 2011


On Mon, Jan 17, 2011 at 12:17:15AM +0000, Anders Sandberg wrote:

> There are far more elegant ways of ensuring friendliness than assuming  
> Kantianism to be right or relying on fixed axioms. Basically, you try to get the  
> motivation system to not only treat you well from the start but also be  
> motivated to evolve towards better forms of well-treating (for a more  
> stringent treatment, see Nick's upcoming book on intelligence  
> explosions). Unfortunately, as Nick, Randall *and* Eliezer all argued  
> today (talks will be put online on the FHI web ASAP) getting this  
> friendliness to work is *amazingly* hard. Those talks managed to  

Well, duh. I guess the next step would be to admit that a scalable
friendliness metric is undefined, never mind that it can't be kept
constrained over the course of open-ended system evolution (not
without that evolution ceasing to be open-ended, aka the Despot
from Hell).

It's just a Horribly Bad Idea. One of the worst I've ever heard of,
actually.

> *reduce* my already pessimistic estimate of the ease of implementing  
> friendliness and increase my estimate of the risk posed by a  
> superintelligence.
>
> This is why I think upload-triggered singularities (the minds will be  
> based on human motivational templates at least) or any singularity with  
> a relatively slow acceleration (allowing many different smart systems to  
> co-exist and start to form self-regulating systems AKA societies) are  
> vastly preferable to hard takeoffs. If we have reasons to think  

Yay. 100% on the same page. 

> hard takeoffs are even somewhat likely, then we need to take  
> friendliness very seriously, try to avoid singularities altogether or  
> move towards the softer kinds. Whether we can affect things enough to  
> influence their probabilities is a good question.
>
> Even worse, we still have no good theory to tell us the likelihood of  
> hard takeoffs compared to soft (and compared to no singularity at all).  

Since it's about a series of inventions, only the first few of them
understandable to us (AI, molecular circuitry, nanotechnology), I
don't think there will be much to hang probabilities onto. The only
way to know for sure is to do it.

> Hopefully we can build a few tomorrow...

-- 
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


