[ExI] Kelly's future

Kelly Anderson kellycoinguy at gmail.com
Tue May 17 03:52:10 UTC 2011


On Mon, May 16, 2011 at 11:55 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 13 May 2011 00:05, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> Super Bowl or Star Trek with me. Perhaps one brain can't be HAPPY at
>> ALL those things, so put multiple AGIs in one body, and problem
>> solved.
>> ...
>> I think you
>> could design an android that LIKED being a rape victim, but also LIKED
>> acting like it didn't want to be a rape victim. In other words, one
>> that a rapist would enjoy going after, but who wasn't damaged in the
>> process. It's all about how you wire up the reward system.
>
> Mmhhh. I think we are flatly in the field of qualia, and of
> hallucinating (in the NLP sense) our own subjective experiences on
> others.

If you are saying that such an android may not actually "like"
something, and could only be judged by its interactions with others,
then yes, that is definitely one way to judge. That said, I think it
is reasonable to conjecture that if a human being can be "programmed"
to a particular effect, then an AGI could be created with similar
effects. So if a real woman exists who liked being raped, liked
playing the rape-victim game, and really liked it very rough, then it
is conceivable that you could create an AGI with the same external
behaviors as such a woman.

I think it is very safe to say that such a woman is an extreme rarity
(thank goodness), but such women do, incredibly, exist. The
unfortunate thing is that most such women would probably exhibit
personality disorders that would make them poorly suited to other
aspects of a relationship, and to getting along in the world
generally. That is why psychology invented the category of
personality disorders in the first place: enjoying something that
others hate is close to the core of a number of personality
disorders.

Now, here is the part that is quite frightening... If someone wished
to create such a monster android, it would probably best be
accomplished by beating the crap out of it and generally mistreating
it as a young android. That is, early in its training, you mistreat
it with the aim of creating a strange, disordered creature that meets
the design goal stated above. If such a creature ends up being
smarter than people, or is later put to some other purpose, then you
have a real problem. If such a being gains "human rights", then we're
really screwed.

Will the ACLU defend the rights of monster AGIs produced for these
kinds of sick twisted purposes?

Kinky sex is only one area in which sick, twisted personalities might
be desired (in a particular context). There are also murderous
personalities that might have military applications, or uses as
assassins. One might create a paparazzi personality to get the right
picture, an argumentative personality to challenge one's thinking, or
the perfect McDonald's employee. There are a lot of cases where, for
a particular purpose, you want a kind of idiot savant: a personality
created for one job that, as a general personality, would be a
complete failure.

The world is, very unfortunately, full of real people with these
problems. I think it is undesirable, but perhaps unavoidable, that we
will create AGIs with personality disorders. (The current twisted
context is a good real-world example of why someone might create such
a monster.) If we then grant those twisted personalities human
rights, and they switch careers, there will continue to be a great
amount of suffering in the world.

> I am pretty much persuaded that doing so with humans is
> philosophically quite untenable, and practically a frequent source of
> ineffective behaviours, and I do not even begin to think what it could
> mean for a machine to be programmed to be "happy" (?) while emitting
> frustration-like signals.

I'm sure there is some kind of twisted porno out there that features
women with just these sorts of personality defects.

> The only thing I can say is that if it is *programmed* to do so, at a
> sociological level we are not likely to project our own "happiness",
> "like", "frustration" experiences on it much more that we currently do
> with our car or with natural phenomena, no matter how persuasive its
> emulation of human signals were to be.

I'm not sure I understand what you're saying here, so if my answer
doesn't make sense, keep trying to get your point across.

People project personalities onto cars and other non-human,
non-intelligent artifacts today. I cuss at my computer, knowing that
it won't do any good (at least not yet). But someday these artifacts
will learn to respond to my emotional state, and then it will be even
easier to get sucked into the anthropomorphic trap. We will want to
program cars to act happy.

-Kelly


