[ExI] Losing control (was: Unfriendly AI is a mistaken idea.)

Samantha Atkins sjatkins at mac.com
Sun Jun 17 17:51:25 UTC 2007


On Jun 17, 2007, at 1:02 AM, Eliezer S. Yudkowsky wrote:
>> o.
>
> Cheap slogan.  What about five-year-olds?  Where do you draw the line?
>
> Someone says they want to hotwire their brain's pleasure center;  
> they say they think it'll be fun.  A nearby AI reads off their brain  
> state and announces unambiguously that they have no idea what'll  
> actually happen to them - they're definitely working based on  
> mistaken expectations.  They're too stubborn to listen to warnings,  
> and they're picking up the handy neural soldering iron (they're on  
> sale at Wal-Mart, a very popular item).  What's the moral course of  
> action? For you?  For society?  For a superintelligent AI?


A good question, and difficult to answer.  Do you protect everyone,
cradle to [vastly remote] grave, from their own stupidity?  How exactly
do they grow or become wiser if you do?  As long as they can recover
(and recovery may be very advanced in the future) and come out a bit
smarter, I am not at all sure that direct intervention is wise, or
moral, or best for its object.


- samantha
