[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Mon May 28 18:05:24 UTC 2007


John Clark writes


> "Lee Corbin" <lcorbin at rawbw.com>  Wrote:
> 
>> your tendency, seen over and over again, is to take
>> single sentences---or here, even
> 
> I would never do such a thing!
> 
>> fragments  of single sentences ---out
> 
> Never!
> 
>> of context, and then have a mini-tirade about them.
> 
> I agree with everything you say above except the "out of context"
> part, and I like to think there is nothing mini about my tirades.

I sincerely apologize.  That last slam was needlessly defamatory.

> I think the most diabolical invention of all time is the respond
> button, people should be required to laboriously type in all
> quoted material, then I'll bet then we wouldn't see quotes of
> quotes of quotes of quotes.

Maybe so!

> But the "something" tacking on emotion to an AI obviously can't be an AI,
> because then the AI would soon have emotion, and that just won't do for your
> friendly AI.

Hmm.  I thought that a Friendly AI might very well have emotions.
Why not?  For one thing, as you say, emotions apparently *can*
facilitate computation, at least in some designs (e.g. humans; cf.
Damasio). 

> I've been asking this question for years on this list but never
> received an answer, if intelligent behavior is possible without
> emotion then why did Evolution invent emotion?

As I said, I believe that emotions facilitate intelligence, for one
thing. It's evidently easier to retain an emotionally charged marker
for someone or something than to retain statistics. Now I
am indeed using a slightly expanded meaning of the term
"emotion".  In my usage here, I mean for it to include the
way you feel when you take in the aspect of a strange animal
you've never seen, or an especially ugly man or woman.
Maybe I'm dead wrong, but I have a hunch that many such
impressions are stored "emotionally".

Anyway, in the Damasio card experiments, one acquires a
"distaste" for certain of the decks where, though one may
not remember it, one has encountered punishment. To me,
that involves "emotion", in this weaker sense.
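
To make the marker-versus-statistics point a bit more concrete, here
is a toy sketch (my own illustration, in Python, with made-up deck
payoffs; it is not Damasio's actual protocol). The agent keeps one
running "valence" per deck rather than any tally of past outcomes,
and that single number is enough for the punishing decks to come to
"feel" bad:

    import random

    # One scalar "valence" per deck, instead of a full record of outcomes.
    valence = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 0.0}
    LEARNING_RATE = 0.2   # how strongly each new outcome recolors the marker

    def choose_deck():
        # Prefer whichever deck currently "feels" best; explore occasionally.
        if random.random() < 0.1:
            return random.choice(list(valence))
        return max(valence, key=valence.get)

    def update(deck, payoff):
        # Exponential moving average: the marker drifts toward recent payoffs,
        # so a punishing deck comes to "feel" bad even if no single outcome
        # is remembered.
        valence[deck] += LEARNING_RATE * (payoff - valence[deck])

    def draw(deck):
        # Made-up payoffs: A and B pay well but carry rare large losses
        # (negative on average); C and D pay less but come out ahead.
        if deck in ("A", "B"):
            return 100 if random.random() < 0.9 else -1150
        return 50 if random.random() < 0.9 else -200

    for _ in range(200):
        d = choose_deck()
        update(d, draw(d))

    # By now valence["A"] and valence["B"] have usually gone negative: the
    # agent avoids them without ever having stored explicit statistics.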

However, you probably mean, why do we have anger, love,
hate, etc., i.e. the common emotions? Why did nature build
them in?  I would answer that it's because nature found that
under many circumstances people survive better when
insane.

Consider anger. One is naturally afraid of---or at least worried
about---making people angry. This is because we know that
once in a sufficiently angry state, they're capable of anything,
not necessarily in their own best interests. Thus, we are
inhibited from doing and saying certain things that might set
them off.  You can see the survival advantage to that.  (Of
course, I realize that this is probably old hat to you, but I
want to lay out all the reasoning I'm doing.)

There is such a thing as "rational irrationality", as you know,
as explained in, for example,
http://pixnaps.blogspot.com/2005/03/rational-irrationality.html

The same basic kind of explanation works for love. If a woman
can detect that a man truly loves her---i.e., is truly out of his
mind---then she can be more confident that he'll remain a good
provider even when it's no longer in his genetic interest.  And so
on.

But do these considerations absolutely *need* to apply to AIs?
Only if, I suggest, they evolve in a Darwinian struggle against
each other. A development free of certain types of competitive
forces and culling could---it seems to me---result in programs
that were free from, say, hate and anger.  And yet it may be
possible to find a development plan that increases the probability
of unrestrained love for humanity. See my next example about
dogs before firing away.

> And lets retire the term "Friendly AI" and call it for what it is, Slave AI.
> If the only reason someone wants to live is so he can serve you then that is
> not a friend; that is a slave.

I say, "so what?"  Suppose that some kind agency resurrects me
in the far future, but does so in such a way that I'm slightly
different in that I bestow upon it my unconditional love and
obedience. Well, that's probably a lot better than not getting to
live at all. Provided that overall I'm still me and still getting some
benefit, I approve.

Suppose that you were a dog, and your intelligence was raised
sufficiently for you to truly understand that your limitless love
for, and obedience to, and worship of your master was the
consequence of breeding.  Can't you see that it would change
nothing for you?  The movie "AI" did make this point quite well.

>> Just where do you get the idea that I believed that Version 347812 would
>> know everything that Version 347813 was going to do?
> 
> I get that idea from your belief that the seed AI, Version .01, will know
> that Version 10^23 will still be willing to be a slave to humanity.

I don't think that the Friendly AI folks are under any illusions about
the risks attending their project.  Even if all other AI development
were stifled, and we waited until they'd proved every last theorem
they could concerning the stability of future intelligence and how it
could be contrived so as not to be dangerous, there would *still* be
huge risk, and they know it.  They just think that the time and care
should be taken to minimize those risks.


>> I've gathered that your skin puts those of any rhinoceros to shame.
> 
> That just may be the nicest thing anybody on this list has ever said to me.

You're very welcome!  Give credit where credit is due, I always say,
and I suppose that without doubt you'd be someone else if it were not
the case.

Lee



