[ExI] Kelly's future

Kelly Anderson kellycoinguy at gmail.com
Mon May 23 07:15:34 UTC 2011


On Sat, May 21, 2011 at 6:20 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 17 May 2011 05:52, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> Now, here is the part that is quite frightening... If someone wishes
>> to create such a monster android, it would probably best be
>> accomplished by beating the crap out of it and generally mistreating
>> it as a young android.
>
> *Or* you can simply emulate the effects. No big deal.

I believe that it is a very big deal indeed. Every remotely successful
artificial intelligence project I am familiar with is not "programmed"
per se, but rather is trained: it is exposed to a wide variety of
stimuli in the area of interest and told what the desired result is.
Yes, there are steering elements in AI, but anything approaching a
satisfying sexual partner would need a pretty high level of
intelligence (even a "ditsy blonde" model), and programming that
directly would likely be very difficult. Training it would likely
work, as in training a dog or a child, and the easiest method would be
to find a human being who has the traits you desire (no matter how
unusual those traits might be), figure out what the psychological
roots of that behavior are, and then try to duplicate those roots in
the AGI.
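To make the train-versus-program distinction concrete, here is a
minimal toy sketch of training by example: a perceptron whose
behaviour is never coded directly, only nudged toward labeled
"desired results" shown to it. (The data and feature layout are
invented for illustration, not from any real project.)

```python
# Train-by-example, in miniature: show the system stimuli paired with
# the desired result, and adjust weights whenever it gets one wrong.

def train(examples, epochs=20, lr=0.1):
    """examples: list of (features, desired_label) pairs, label in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, desired in examples:
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = desired - predicted  # "telling it what the desired result is"
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Stimuli in the "area of interest", each paired with the desired response.
data = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
model = train(data)
```

Nobody wrote a rule saying which inputs map to which output; the
mapping emerges from the examples, which is the point being made above.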

>> If such a being gains "human rights", then we're really
>> screwed.
>
> I am not very fond of the very concept of metaphysical "human rights"
> in itself, as opposed to civil rights and liberties, which are those
> granted by the community to what it perceives to be its members. There
> again, I suspect that a sociologically widespread ability to project
> one's qualia on a being make it likely that some rights are granted to
> that being (say, you uploaded wife). When such projection is more
> difficult (say, your car), we are less likely to consider them as
> "citizens".

This may merit its own thread... but I suspect that some day an AGI
will begin to exhibit a kind of super consciousness. That is, this
being may be able to completely understand our level of consciousness,
but will have its own level of consciousness that it can't entirely
comprehend (just as we can't comprehend our own consciousness). That
higher degree of consciousness will likely have qualia that we can't
comprehend at all. Rights will be granted, but by whom and to what
degree we will have to wait and see. I can't imagine that we have
anything close to the wisdom and experience to say very much useful at
this point. Anyone want to grant any level of rights to Watson?

>
>>> I am pretty much persuaded that doing so with humans is
>>> philosophically quite untenable, and practically a frequent source of
>>> ineffective behaviours, and I do not even begin to think what it could
>>> mean for a machine to be programmed to be "happy" (?) while emitting
>>> frustration-like signals.
>>
>> I'm sure there is some kind of twisted porno out there that features
>> women with just these sorts of personality defects.
>
> No, my point is again that we do not know anything about fellow humans
> other than what they signal, and that would also be true for
> machines.

I think we know more than that about other human beings. There are a
lot of constants that cannot be changed by any degree of training or
experience; we are born with certain elements already in place. Even
in the parts we learn, I think we know quite a bit about what's going
on in other people's heads.

> More than a personality defect, I am more inclined to
> consider a like for rape a contradiction in terms, when what we are
> really dealing with is somebody who is having consensual sex while
> mimicking (some) rape-related behaviours, and yet simultaneously
> exhibiting typical consensual sex reactions (tumescence, lubrication,
> orgasm, endorphine release, subsequent approving comments).

Since there are people who have a fetish where they want their limbs
removed (which is fully, completely crazy in my book), there are in
all likelihood women who really, truly enjoy rape, even the danger and
pain of it. There are some really nutty folks out there. So I don't
entirely agree with your statement, although you could do that too, if
that is what is desired.

>> People project personality on cars and other non-human and
>> non-intelligent artifacts today. I cuss at my computer, knowing that
>> it won't do any good (at least yet). But some day, these artifacts
>> will learn to respond to my emotional state. Then it will be even
>> easier to get sucked into the anthropomorphic trap. We will want to
>> program cars to act happy.
>
> Yes, your reply is pertinent, but I wonder... Humans, animals and
> machines alike do not respond to our emotional state, but to
> behaviours it dictates. Let us say that machines may get better at
> interpreting them, from insect level to dog level to fellow human
> level. Even in the human realm, we do not deal with a b/w scenario. An
> infant, a retarded or a member of a radically different culture may
> have a limited or altogether inappropriate response to such
> behaviours.

True enough. With humans, there are some things that appear to be
cross-cultural. Smiling, for example, along with most of the other
facial expressions, seems to be mostly independent of culture. This is
a real surprise, but it does seem to be the case. Other things are
clearly cultural: hand signals, words, meanings assigned to particular
colors, clothing styles, and a wide variety of behaviors. I think even
our traditional computers will begin to react to our externally
signaled emotional state within the next five years or so. It will
probably end up being built into the operating system eventually.

Computer2017: "Oh, I see you are confused, can I explain what's going
on or provide some other kind of assistance?"
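A hypothetical sketch of what such OS-level responsiveness might look
like once an emotional state has been detected (by camera, typing
cadence, or whatever sensor ends up being used). Every state name and
response string here is invented for illustration:

```python
# Hypothetical mapping from a detected emotional state to an
# assistance behaviour; nothing here reflects any real OS API.
RESPONSES = {
    "confused":   "Oh, I see you are confused. Can I explain what's going on?",
    "frustrated": "That looked frustrating. Want me to undo the last action?",
    "neutral":    None,  # no interruption when nothing is signaled
}

def respond_to_emotion(state):
    """Return an assistance prompt for a detected state, or None."""
    return RESPONSES.get(state)
```

The hard part, of course, is the detection, not the table lookup; the
sketch only shows where the response logic would sit.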

This kind of responsiveness to emotional state will be particularly
important for sexbots, serverbots, household assistants, elderly
assist robots, etc. It isn't so important in industrial and
agricultural robots.

-Kelly
