[ExI] Unfriendly AI is a mistaken idea.
Lee Corbin
lcorbin at rawbw.com
Fri Jun 8 00:49:39 UTC 2007
Jeffrey (A B) writes
> John Clark wrote:
>
> > "No, a computer doesn't need emotions,
> > but an AI must have them."
>
> An AI *is* a specific computer. If my desktop
> doesn't need an emotion to run a program or
> respond within it, why "must" an AI have emotions?
In these confusing threads, an AI is often taken
to mean a vastly superhuman AI which by definition
is capable of vastly outthinking humans.
Formerly, I had agreed with John because,
at least for human beings, emotion sometimes
plays an important part in what one would
think of as purely intellectual functioning. I was
working off the Damasio card experiments,
which seem to show that humans require---for
full intellectual power---some emotion.
However, Stathis has convinced me otherwise,
at least to some extent.
> A non-existent motivation will not "motivate"
> itself into existence. And an AGI isn't
> going to pop out of thin air, it has to be
> intentionally designed, or it's not going to
> exist.
At one point John was postulating a version
of an AGI, e.g. version 3141592, which was
a direct descendant of version 3141591. I
took him to mean that the former was solely
designed by the latter, and was *not* the
result of an evolutionary process. So I
contended that 3141592---as well as all
versions way back to 42, say---being products
of truly *intelligent design*, need not have
the full array of emotions. Like Stathis, I
supposed that perhaps 3141592 and all its
predecessors might have been focused, say,
on solving physics problems.
(On the other hand I did affirm that if a
program was the result of a free-for-all
evolutionary process, then it likely would
have a full array of emotions---after all,
we and all the higher animals have them.
Besides, it makes good evolutionary
sense. Take anger, for example. In an
evolutionary struggle, those programs
equipped with the temporary insanity
we call "anger" have a survival advantage.)
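To make that concrete, here is a toy sketch in Python (entirely my
own illustration, with made-up payoff numbers, nothing from the
thread or from Damasio): agents repeatedly contest a resource, and
those carrying a "retaliate when crossed" trait out-reproduce the
docile ones under fitness-proportional selection.

import random

def contest(a_angry, b_angry):
    """Payoffs for one meeting over a shared resource (arbitrary numbers)."""
    if a_angry and b_angry:
        return 1, 1      # both retaliate: costly standoff
    if a_angry:
        return 3, 1      # the docile agent backs off
    if b_angry:
        return 1, 3
    return 2, 2          # both docile: share the resource peacefully

def run(generations=50, pop_size=100, rounds=5, seed=0):
    rng = random.Random(seed)
    pop = [rng.random() < 0.1 for _ in range(pop_size)]    # 10% start out "angry"
    for _ in range(generations):
        scores = [0.0] * pop_size
        for _ in range(rounds):                            # several random pairings
            order = rng.sample(range(pop_size), pop_size)
            for i in range(0, pop_size - 1, 2):
                a, b = order[i], order[i + 1]
                pa, pb = contest(pop[a], pop[b])
                scores[a] += pa
                scores[b] += pb
        # next generation: offspring in proportion to payoff
        pop = rng.choices(pop, weights=[s + 0.01 for s in scores], k=pop_size)
    return sum(pop) / pop_size

print("fraction 'angry' after selection:", run())

Nothing deep there, of course; the point is only that the trait
spreads because of the payoffs, not because anyone designed it in.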
> I suppose it's *possible* that a generic
> self-improving AI, as it expands its knowledge and
> intelligence, could innocuously "drift" into coding a
> script that would provide emotions *after-the-fact*
> that it had been written.
:-) I don't even agree with going *that* far!
A specially crafted AI---again, not an evolutionarily
derived one, but one that is the result of *intelligent design*
(something tells me I am going to be sorry for using
that exact phrase)---cannot any more drift into
having emotions than it can drift into sculpting
David out of a slab of stone. Or than over the
course of eons a species can "drift" into having
an eye: No! Only a careful pruning by mutation
and selection can give you an eye, or the ability
to carve a David.
> But that will *not* be an *emotionally-driven*
> action to code the script, because the AI will
> not have any emotions to begin with (unless they
> are intentionally programmed in by humans).
I would let this pass without comment, except
that in all probability the first truly sentient, human-
level AIs will be the result of evolutionary
activity. To wit, humans set up conditions in which
a lot of AIs can breed like genetic algorithms,
compete against each other, and develop whatever
is best to survive (and so in that way acquire emotion).
Since this is *so* likely, it's a mistake IMHO to
omit mentioning the possibility.
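To spell out what I mean by "breed like genetic algorithms", here is
a minimal sketch (my own toy example, not anything John or Jeffrey
has proposed): the bit-string genomes, the head-to-head contest that
supplies fitness, and the mutation/crossover step are all placeholders.

import random

GENOME_LEN = 16
rng = random.Random(1)

def compete(a, b):
    """Toy head-to-head contest: the genome with more 1-bits wins."""
    return 1 if sum(a) > sum(b) else 0

def fitness(individual, population, rounds=10):
    """Score an individual by how often it beats randomly drawn rivals."""
    return sum(compete(individual, rng.choice(population)) for _ in range(rounds))

def breed(parent_a, parent_b, mutation_rate=0.05):
    """One-point crossover followed by bit-flip mutation."""
    cut = rng.randrange(1, GENOME_LEN)
    child = parent_a[:cut] + parent_b[cut:]
    return [bit ^ 1 if rng.random() < mutation_rate else bit for bit in child]

def evolve(pop_size=50, generations=30):
    population = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda ind: fitness(ind, population),
                        reverse=True)
        survivors = ranked[: pop_size // 2]        # the competitive half survives
        population = survivors + [breed(rng.choice(survivors), rng.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population

print("best genome after selection:", max(evolve(), key=sum))

The point is simply that nothing in the loop says what the winning
programs have to be like inside; whatever helps them win the
contests, anger included, is what gets kept.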
> That's why it's important to get its starting
> "motivations/directives" right, because if
> they aren't the AI mind could "drift" into
> a lot of open territory that wouldn't be
> good for us, or itself. Paperclip style.
I would agree that the same cautions that
apply to nanotech are warranted here.
To the degree that an AI---the superhuman
AGI we are talking about---has power,
it could of course, by our lights, drift
(as you put it) into doing things not to
our liking.
Lee