[ExI] Kelly's future

Stefano Vaj stefano.vaj at gmail.com
Mon May 16 17:55:23 UTC 2011


On 13 May 2011 00:05, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> I want my partner to be HAPPY to take out the garbage when
> it's time to do that, and HAPPY to go to bed, and HAPPY to watch the
> Super Bowl or Star Trek with me. Perhaps one brain can't be HAPPY at
> ALL those things, so put multiple AGIs in one body, and problem
> solved.
> ...
> I think you
> could design an android that LIKED being a rape victim, but also LIKED
> acting like it didn't want to be a rape victim. In other words, one
> that a rapist would enjoy going after, but who wasn't damaged in the
> process. It's all about how you wire up the reward system.

Mmhhh. I think we are flatly in the field of qualia, and of
hallucinating (in the NLP sense) our own subjective experiences onto
others.

I am pretty much persuaded that doing so with humans is
philosophically quite untenable, and in practice a frequent source of
ineffective behaviours; and I cannot even begin to imagine what it
would mean for a machine to be programmed to be "happy" (?) while
emitting frustration-like signals.

The only thing I can say is that if it is *programmed* to do so, at a
sociological level we are not likely to project our own "happiness",
"liking", or "frustration" experiences onto it much more than we
currently do onto our cars or onto natural phenomena, no matter how
persuasive its emulation of human signals might be.

-- 
Stefano Vaj
