[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Fri Jun 8 17:18:51 UTC 2007


Lee wrote:

> "In these confusing threads, an AI is often taken
> to mean a vastly superhuman AI which by definition
> is capable of vastly outhinking humans."

Yep. But a superhuman AGI is still a computer. If my
desktop doesn't require an emotion in order to open
Microsoft Office, or to run a virus scan when I
instruct it to (AKA "motivate" it to), then why *must*
a supercomputer designated as an AGI have an emotion
in order to run the AGI engine program when I instruct
it to? I don't think it does.

> "Formerly, I had agreed with John because at
> least for human beings, emotion sometimes
> plays an important part in what one would 
> think of as purely intellectual functioning. I was
> working off the Damasio card experiments, 
> which seem to show that humans require---for
> full intellectual power---some emotion."

But more often than not, emotion clouds judgment and
rationality. Believe me, I should know. Evolution
tacked on emotion because it accidentally happened to
be (in the aggregate) useful for animal survival and
*reproduction* in particular - which is all that
evolution "cares" about. Evolution didn't desire to
create intelligent beings, because evolution doesn't
desire anything. Emotion is *not* the basis of thought
or consciousness - that can't be stressed enough. And
you may have noticed that humanity seems to thrive on
irrationality. It doesn't seem to require much
rationality or even much intelligence to attract a
person into having sex. It's just that you can't have
emotion until you have consciousness, and you can't
have consciousness until you have a threshold baseline
intelligence. Thanks a lot, evolution! [Shaking Fist].
We could have used that extra skull volume for greater
intelligence and rationality!

> "(On the other hand I did affirm that if a
> program was the result of a free-for-all
> evolutionary process, then it likely would
> have a full array of emotions---after all, 
> we and all the higher animals have them.
> Besides, it makes good evolutionary 
> sense. Take anger, for example. In an
> evolutionary struggle, those programs
> equipped with the temporary insanity
> we call "anger" have a survival advantage.)"

But an AGI isn't likely to be derived solely or even
mostly from genetic programming, IMO. If it were that
easy, we'd have an AGI already. :-) Think of the
awesome complexity of a single atom. Now imagine
describing its behavior fully with nothing but
algorithms. That's a boat-load of *correct*
algorithms. That would be a task so Herculean that
it's almost certainly not feasible any time in the
near future.

 ":-)  I don't even agree with going *that* far!
> A specially crafted AI---again, not an
> evolutionarily
> derived one, but one the result of *intelligent
> design*
> (something tells me I am going to be sorry for using
> that exact phase)---cannot any more drift into
> having emotions than in can drift into sculpting
> David out of a slab of stone.  Or than over the
> course of eons a species can "drift" into having
> an eye:  No!  Only a careful pruning by mutuation
> and selection can give you an eye, or the ability
> to carve a David."

I don't know. I think that a generic self-improving
AGI could easily drift into undesirable areas (for us
and itself) if its starting directives (=motivations)
aren't carefully selected. After all, it will be
rewriting and expanding its own mind. The drift would
probably be subtle (still close to the directives) to
begin with, but could become increasingly divergent as
more internal changes are made. Let's be careful in
our selection of directives, shall we? :-)

And animals did genetically drift into having an eye;
that's how biological evolution works. And we already
have artificial machines with vision and artistic
"ability". And they weren't created by eons of orgies
of Dell desktops. They were created by human
ingenuity. :-) 

> "I would less this pass without comment, except
> that in all probability, the first truly sentient
> human-
> level AIs will very likely be the result of
> evolutionary
> activity.  To wit, humans set up conditions in which
> a lot of AIs can breed like genetic algorithms, 
> compete against each other, and develop whatever
> is best to survive (and so in that way acquire
> emotion).
> Since this is *so* likely, it's a mistake IMHO to 
> omit mentioning the possibility."

My guess is that it isn't likely. You'd have to
already have baseline AGI agents before they could
compete with each other to that end. If the AI agents
are narrow, then the one that wins will just be the
best chess player of the bunch. I'm not absolutely
sure, though. Perhaps one of the AGI programmers here
can chime in on this one. Although I suppose that you
could have some baseline AGIs compete with each other.
I'm not sure that's a good idea though... do we want
angry, aggressive AGIs at the end? Evolution is not
the optimal designer, after all.
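
(For anyone who hasn't run into them: below is a minimal sketch of
the kind of selection loop a genetic algorithm runs. The bit-string
"genome", the toy fitness function, and the population and mutation
parameters are purely my own illustration - not anyone's actual AGI
proposal - just to make the mechanics concrete.)

import random

# Toy genetic algorithm: evolve bit-strings toward an arbitrary target.
# The "agents" here are just lists of bits; everything below is an
# illustrative stand-in, not a real AGI training setup.

TARGET = [1] * 20          # hypothetical "ideal" genome
POP_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(genome):
    # Count bits matching the target (a stand-in for "survival ability").
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def crossover(a, b):
    # Single-point crossover: the child takes a prefix from one parent
    # and the remainder from the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: the fitter half "breeds"; the rest are discarded.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))

The point is that whatever traits happen to help an agent win that
selection step get amplified generation after generation, whether or
not the designers wanted them - which is exactly the worry about
angry, aggressive AGIs.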

> "I would agree that the same cautions that
> apply to nanotech are warranted here. 
> To the degree that an AI---superhuman
> AGI we are talking about---has power,
> then by our lights it could of course drift
> (as you put it) into doing things not to
> our liking."

Yep. And the Strong AI existential risk seems to be
the one receiving the least cautious attention from
important people. We should try to change that if we
can. For example, the US government is finally
beginning to publicly acknowledge that we need to be
carefully proactive about nanotech, without
relinquishing it. Not that I'm encouraging government
oversight and control in particular, just pointing out
an example.

Best,

Jeffrey Herrlich



--- Lee Corbin <lcorbin at rawbw.com> wrote:

> Jeffrey (A B) writes
> 
> 
> > John Clark wrote:
> > 
> > > "No, a computer doesn't need emotions,
> > > but an AI must have them."
> > 
> > An AI *is* a specific computer. If my desktop
> > doesn't need an emotion to run a program or
> > respond within it, why "must" an AI have emotions?
> 
> In these confusing threads, an AI is often taken
> to mean a vastly superhuman AI which by definition
> is capable of vastly outhinking humans. 
> 
> Formerly, I had agreed with John because at
> least for human beings, emotion sometimes
> plays an important part in what one would 
> think of as purely intellectual functioning. I was
> working off the Damasio card experiments, 
> which seem to show that humans require---for
> full intellectual power---some emotion.
> 
> However, Stathis has convinced me otherwise,
> at least to some extent. 
> 
> > A non-existent motivation will not "motivate"
> > itself into existence. And an AGI isn't
> > going to pop out of thin air, it has to be
> > intentionally designed, or it's not going to
> > exist.
> 
> At one point John was postulating a version
> of an AGI, e.g. version 3141592 which was
> a direct descendant of version 3141591. I
> took him to mean that the former was solely
> designed by the latter, and was *not* the
> result of an evolutionary process. So I 
> contended that 3141592---as well as all
> versions way back to 42, say---as products
> of truly *intelligent design* need not have 
> the full array of emotions.  Like Stathis, I
> supposed that perhaps 3141592 and all its
> predecessors might have been focused, say,
> on solving physics problems. 
> 
> (On the other hand I did affirm that if a
> program was the result of a free-for-all
> evolutionary process, then it likely would
> have a full array of emotions---after all, 
> we and all the higher animals have them.
> Besides, it makes good evolutionary 
> sense. Take anger, for example. In an
> evolutionary struggle, those programs
> equipped with the temporary insanity
> we call "anger" have a survival advantage.)
> 
> > I suppose it's *possible* that a generic
> > self-improving AI, as it expands its knowledge and
> > intelligence, could innocuously "drift" into
> > coding a script that would provide emotions
> > *after-the-fact* that it had been written.
> 
> :-)  I don't even agree with going *that* far!
> A specially crafted AI---again, not an
> evolutionarily derived one, but one the result of
> *intelligent design* (something tells me I am going
> to be sorry for using that exact phrase)---cannot
> any more drift into having emotions than it can
> drift into sculpting David out of a slab of stone.
> Or than over the course of eons a species can
> "drift" into having an eye:  No!  Only a careful
> pruning by mutation and selection can give you an
> eye, or the ability to carve a David.
> 
> > But that will *not* be an *emotionally-driven*
> > action to code the script, because the AI will
> > not have any emotions to begin with (unless they
> > are intentionally programmed in by humans).
> 
> I would let this pass without comment, except
> that in all probability, the first truly sentient
> human-level AIs will very likely be the result of
> evolutionary activity.  To wit, humans set up
> conditions in which a lot of AIs can breed like
> genetic algorithms, compete against each other,
> and develop whatever is best to survive (and so in
> that way acquire emotion).  Since this is *so*
> likely, it's a mistake IMHO to omit mentioning the
> possibility.
> 
> > That's why it's important to get its starting
> > "motivations/directives" right, because if
> > they aren't, the AI mind could "drift" into
> > a lot of open territory that wouldn't be
> > good for us, or itself. Paperclip style.
> 
> I would agree that the same cautions that
> apply to nanotech are warranted here. 
> To the degree that an AI---superhuman
> AGI we are talking about---has power,
> then by our lights it could of course drift
> (as you put it) into doing things not to
> our liking.
> 
> Lee
> 



       


