[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Fri Jun 8 19:22:23 UTC 2007


John Clark wrote:

> "Apparently Evolution was also wrong to invent
> emotion first and only after
> 500 million years come up with intelligence."

Evolution didn't invent emotion first. Intelligence
existed first, and humans are hardly the first animals
to have some level of intelligence.

"42."

I see that I still have a ways to go, then. ;-)

> "Then you *are* a specific computer too."

Correct.

> "If you are willing to embrace the fantasy that your
> desktop is intelligent
> then I see no reason you would not also believe in
> the much more modest and
> realistic fantasy that it is emotional. Emotions are
> easy, intelligence is
> hard."

Narrow intelligence is still intelligence. It all runs
on algorithms, the desktop and my brain alike. Human
intelligence is hard, but animal intelligence had been
around for hundreds of millions of years beforehand.

> "I don't know why I'm claiming that either because I
> don't know what the hell
> you're talking about. Any AI worthy of the name will
> write programs for it
> to run on itself and nobody including the AI knows
> what the outcome of those
> programs will be. Even the AI doesn't know what it
> will do next, it will
> just have to run the programs and wait to see what
> it decides to do next;
> and that is the only meaning I can attach to the
> noise "free will" that is
> not complete gibberish."

My chess program has narrow AI, but it doesn't alter
its own code. It's not conscious, but it does have a
level of intelligence. If the AGI is directed not to
alter or expand its code in some specific set of ways,
then it won't do so, precisely as instructed. The
directives that we program into it will be the only
form of "motivation" that it begins with. Needless to
say, it's important that we get those directives
right; hence the "Friendly" part.
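
Here's a toy Python sketch of what I mean (the names
are made up for illustration, not any real system's
API): the AI's only starting "motivation" is a fixed
table of directives written by its programmers, and a
proposed act of self-modification is simply refused.

    # Toy sketch: human-written directives are checked
    # before any proposed action is carried out.
    FORBIDDEN = {"modify_own_code", "expand_own_code"}

    def run_step(proposed_action):
        """Execute an action only if no directive forbids it."""
        if proposed_action in FORBIDDEN:
            return "refused by directive"
        return "executed " + proposed_action

    print(run_step("play_chess_move"))  # executed play_chess_move
    print(run_step("modify_own_code"))  # refused by directive

A real AGI's directives would be vastly more subtle
than a string lookup, but the principle is the same:
the check runs before the action, and nothing in the
system is motivated to route around it.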

> The problem is that a mind that is not even
> *remotely* similar to *any* of
> the *higher* functions of the human mind, that is to
> say if there is
> absolutely no point of similarity between us and
> them then that mind is not
> functioning very well. It is true that a mind that
> didn't understand
> mathematics or engineering or science or philosophy
> or economics would be of
> no threat to us, but it would be of no use  either;
> and as we have
> absolutely nothing in common with it there would be
> no way to communicate
> with it and thus no reason to build it.

There will be similarities, at the very bottom. Both
require formative algorithms. Emotion is a much
higher-level, macroscopic phenomenon, and not
necessary to a functioning mind. My desktop functions
pretty well, and if I wanted, it could even help me
with science and engineering (calculation and CAD
programs, etc.). Current computers already help humans
do a lot of things; e.g., Moore's Law is sustained
partly by using today's computers to design tomorrow's
chips. Look at the huge range of behaviors within
humanity, and that's all within a very small sector of
the total mind possibility-space.

> "Our ancestors had emotions long ago when they begin
> their evolutionary
> journey, but that's different because, because,
> well, because meat has a
> soul but semiconductors never can. I know you don't
> like that 4 letter word
> but face it, that is exactly what you're saying."

Nope, I'm not saying that. I've specifically said that
a machine *can* have emotions. All I've said is that
no emotion will exist where there is no capacity for
emotion, and that capacity will not pop out of thin
air. It will either have to be written by humans, or
it will have to be written by the AI. The key here is
that the AI will not write that capacity if it is
directed not to do so. And it will not be emotionally
driven to ignore or override that directive, precisely
because it will have no emotions when it first comes
on-line.

An emotion is not going to be embodied within a
three-line script, but an *extremely* limited degree
of intelligence can be (narrow intelligence).
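
For instance, here is a roughly three-line Python
sketch of exactly that kind of narrow intelligence
(the move names and scores are invented):

    # A sliver of narrow intelligence: pick the
    # highest-scoring move. No capacity for emotion
    # exists anywhere in these lines.
    def best_move(moves, score):
        return max(moves, key=score)

    print(best_move(["a4", "Qxf7#"],
                    {"a4": 0.1, "Qxf7#": 99.0}.get))  # Qxf7#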

Best,

Jeffrey Herrlich 



--- John K Clark <jonkc at att.net> wrote:

> "A B" <austriaaugust at yahoo.com>
> 
> > your intuitions about emotions
> > and motivations are just totally *wrong*.
> 
> Apparently Evolution was also wrong to invent
> emotion first and only after
> 500 million years come up with intelligence.
> 
> > In how many different ways must that be
> > demonstrated?
> 
> 42.
> 
> > An AI *is* a specific computer.
> 
> Then you *are* a specific computer too.
> 
> > If my desktop doesn't need an emotion to run a
> > program or respond within it, why "must" an AI have
> > emotions?
> 
> If you are willing to embrace the fantasy that your
> desktop is intelligent
> then I see no reason you would not also believe in
> the much more modest and
> realistic fantasy that it is emotional. Emotions are
> easy, intelligence is
> hard.
> 
> > I don't understand it John, before you were
> > claiming fairly ardently that "Free Will" doesn't
> > exist.
> 
> I made no such claim, I claimed it does not even
> have the virtue of non
> existence, as expressed by most people the noise
> "free will" is no more
> meaningful than a burp.
> 
> > Why are you now claiming in effect that an AI will
> > automatically execute a script of code that doesn't
> > exist - because it was never written (either by the
> > programmers or by the AI)?
> 
> I don't know why I'm claiming that either because I
> don't know what the hell
> you're talking about. Any AI worthy of the name will
> write programs for it
> to run on itself and nobody including the AI knows
> what the outcome of those
> programs will be. Even the AI doesn't know what it
> will do next, it will
> just have to run the programs and wait to see what
> it decides to do next;
> and that is the only meaning I can attach to the
> noise "free will" that is
> not complete gibberish.
> 
> > The problem is, not all functioning minds must be
> > even *remotely* similar to the higher functions of
> > a *human* mind.
> 
> The problem is that a mind that is not even
> *remotely* similar to *any* of
> the *higher* functions of the human mind, that is to
> say if there is
> absolutely no point of similarity between us and
> them then that mind is not
> functioning very well. It is true that a mind that
> didn't understand
> mathematics or engineering or science or philosophy
> or economics would be of
> no threat to us, but it would be of no use  either;
> and as we have
> absolutely nothing in common with it there would be
> no way to communicate
> with it and thus no reason to build it.
> 
> > AI will not have any emotions to begin with
> 
> Our ancestors had emotions long ago when they begin
> their evolutionary
> journey, but that's different because, because,
> well, because meat has a
> soul but semiconductors never can. I know you don't
> like that 4 letter word
> but face it, that is exactly what you're saying.
> 
>  John K Clark