[ExI] Kelly's future

Stefano Vaj stefano.vaj at gmail.com
Sat May 21 12:20:41 UTC 2011


On 17 May 2011 05:52, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> If you are saying that such an android may not actually "like"
> something, but could only be judged by their interaction with others,
> then yes, that is definitely one way to judge. That being said, I
> think it is reasonable to conjecture that if a human being can be
> "programmed" to a particular effect, that it would be reasonable to
> believe that you could also create an AGI with similar effects. So, if
> you can find a real woman who liked being raped, and playing the rape
> victim game, and really liked it very rough, that it would be
> conceivable to believe that you could create an AGI with similar
> external behaviors to such a woman.

Absolutely. Only, as long as we do not subjectively perceive a degree
of Turing-like unpredictability in the android, it becomes hard to
project our own qualia onto it, and we are likely to consider its
reactions on the same basis as those of, say, a car, a hypnotised
human, or a corpse moved by some Galvani-like effect.

> Now, here is the part that is quite frightening... If someone wishes
> to create such a monster android, it would probably best be
> accomplished by beating the crap out of it and generally mistreating
> it as a young android.

*Or* you can simply emulate the effects. No big deal.

> If such a being gains "human rights", then we're really
> screwed.

I am not very fond of the very concept of metaphysical "human rights"
in itself, as opposed to civil rights and liberties, which are those
granted by the community to those it perceives to be its members. Then
again, I suspect that a sociologically widespread ability to project
one's qualia onto a being makes it likely that some rights will be
granted to that being (say, your uploaded wife). When such projection
is more difficult (say, with your car), we are less likely to consider
the being in question a "citizen".

>> I am pretty much persuaded that doing so with humans is
>> philosophically quite untenable, and practically a frequent source of
>> ineffective behaviours, and I do not even begin to think what it could
>> mean for a machine to be programmed to be "happy" (?) while emitting
>> frustration-like signals.
>
> I'm sure there is some kind of twisted porno out there that features
> women with just these sorts of personality defects.

No, my point is again that we do not know anything about fellow humans
other than what they signal, and the same would be true of machines.
Rather than a personality defect, I am inclined to consider a liking
for rape a contradiction in terms, when what we are really dealing
with is somebody who is having consensual sex while mimicking (some)
rape-related behaviours, and yet simultaneously exhibiting typical
consensual-sex reactions (tumescence, lubrication, orgasm, endorphin
release, subsequent approving comments).

> People project personality on cars and other non-human and
> non-intelligent artifacts today. I cuss at my computer, knowing that
> it won't do any good (at least yet). But some day, these artifacts
> will learn to respond to my emotional state. Then it will be even
> easier to get sucked into the anthropomorphic trap. We will want to
> program cars to act happy.

Yes, your reply is pertinent, but I wonder... Humans, animals and
machines alike do not respond to our emotional state as such, but to
the behaviours it dictates. Let us say that machines may get better at
interpreting them, from insect level to dog level to fellow-human
level. Even in the human realm, we are not dealing with a
black-and-white scenario. An infant, a retarded person, or a member of
a radically different culture may have a limited or altogether
inappropriate response to such behaviours.

-- 
Stefano Vaj
