[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Tue May 29 04:35:41 UTC 2007


John Clark writes

> "Lee Corbin" <lcorbin at rawbw.com>
>> John wrote
>>
>>> I agree with everything you say above except the "out of context"
>>> part, and I like to think there is nothing mini about my tirades.
>>
>> I sincerely apologize.  That last slam was needlessly defamatory
> 
> Not at all, there is absolutely no reason to apologize!

Oh, rats, here is a case where sarcasm failed completely.
I was pretending that my having labeled your tirades "mini"
was to you utterly defamatory....   :-)

>> I have a hunch that many such impressions are stored "emotionally".
> I have a similar hunch.
>> as you say, emotions apparently *can* facilitate computation
> 
> Indeed they can. I think it would be impossible to make an unemotional AI,
> but even you admit it would be harder to make an unemotional AI than an
> emotional one. So guess what sort of AI the very first AI (the mega
> important one) will be.

We agree: among the first and most powerful and dangerous AIs will
probably be those that are emotional. But we may disagree as to *how*
emotional, and in what ways. Of course, all we can do is speculate.

>> nature found that under many circumstances people survive better when
>> insane.
> 
> That I cannot agree with. I would define someone insane if their belief
> system was diametrically contradictory to reality. By that definition all of
> us are a bit mad, but I like to think I'm 51% sane. I may be deluding
> myself.

By "under many circumstances", I meant to be describing temporary insanity.
Also, by "insanity" I was referring to behavior that lessened an entity's
prospects for survival (pace the correct arguments that an entity may
rationally hold positions that entail its own sacrifice) at a particular point
in time. In particular, surely you admit that people who become
*incredibly* angry can go so far as to be dangerous to themselves,
yet nonetheless this capability can, on the meta-level, be advantageous to
them, solely because it intimidates others against taking action
that would place the entity in this state.

Example: if you know that I will kill both of us if upset, you will see to it that
you do not upset me.

>> Suppose that some kind agency resurrects me in the far future, but does so
>> in such a way that I'm slightly different in that I bestow upon it
>> my unconditional love and obedience.
> 
> To make this example more accurate, suppose you awaken and find you have
> pledged unconditional love and obedience to a particularly ugly and a
> particularly stupid sea slug. That is the horrible fate you are wishing on
> our noble and brilliant AI. That just isn't right.

Why not?  If I am the sea slug (which, next to an advanced AI, is exactly what
I am), then I find this a most reasonable and desirable outcome!  I hope this
happens.

OTOH, if I am the intelligent resurrectee in your scenario who has unconditional
love for the sea slug, then, as I said, it's in almost all cases obviously a lot better
than nothing. If you want details (ugh!), I would doubtless buy a nice swimming
pool for the sea slug and make sure it had every possible benefit. That would
still leave me plenty of free time to do as I wished.

That's all I really want from the ruling AIs: that they love us and take good care
of us (even to the point of putting versions of us on a maximal path of
exponential advancement). Whatever else they do, I really don't care
about, not that I would understand it in any case.

Lee
