[ExI] Losing control (was: Unfriendly AI is a mistaken idea.)
stathisp at gmail.com
Sun Jun 17 08:57:57 UTC 2007
On 17/06/07, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
> Someone says they want to hotwire their brain's pleasure center; they
> say they think it'll be fun. A nearby AI reads off their brain state
> and announces unambiguously that they have no idea what'll actually
> happen to them - they're definitely working based on mistaken
> expectations. They're too stubborn to listen to warnings, and they're
> picking up the handy neural soldering iron (they're on sale at
> Wal-Mart, a very popular item). What's the moral course of action?
> For you? For society? For a superintelligent AI?
I, society, or the superintelligent AI should inform the person of the risks
and benefits, then let him do as he pleases.