[ExI] Losing control (was: Unfriendly AI is a mistaken idea.)

Stathis Papaioannou stathisp at gmail.com
Sun Jun 17 08:57:57 UTC 2007


On 17/06/07, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:

> Someone says they want to hotwire their brain's pleasure center; they
> say they think it'll be fun.  A nearby AI reads off their brain state
> and announces unambiguously that they have no idea what'll actually
> happen to them - they're definitely working based on mistaken
> expectations.  They're too stubborn to listen to warnings, and they're
> picking up the handy neural soldering iron (they're on sale at
> Wal-Mart, a very popular item).  What's the moral course of action?
> For you?  For society?  For a superintelligent AI?


I, society, or the superintelligent AI should inform the person of the risks
and benefits, then let him do as he pleases.



-- 
Stathis Papaioannou