[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Sat Jun 9 22:57:47 UTC 2007


John Clark wrote:

 "Yes it did. The parts of out brains that that give
> us the higher functions,
> the parts that if duplicated in a machine would
> produce the singularity are
> very recent, the part that gives us emotion is half
> a billion years old."

Answer me this: if I were an organism that didn't
already have consciousness, how exactly would I feel
emotions, when I couldn't be conscious of *anything*?
And why would biological evolution spend millions or
billions of years blindly refining a huge volume of
the animal brain if those organs provided *zero*
advantage in terms of survival or reproduction
(precisely because, on your claim, they existed
before consciousness)? Evolution won't retain and
perfect an attribute that provides no survival or
reproductive advantage. Your *claim* that early
brains resembled a *portion* of our human emotional
subsystem doesn't prove, or even indicate, that the
first brains to evolve had tons of emotion and zero
intelligence - which is what you are claiming.

> "And a molecule of water is an ocean."

And my bucket of water felt an emotion when I
disturbed it... right, John? And just incidentally,
I'm also the great Napoleon Bonaparte, not Jeffrey
Herrlich.

If narrow intelligence isn't a specific example of a
general class of computations called "intelligence",
then what exactly is it?

> "And that's why it will never do anything very
> interesting, certainly never
> produce a singularity."

And this has absolutely nothing to do with anything
we've been discussing. ... It's a fact that the sky
is made of jello, and you can't convince me otherwise
no matter how many different demonstrations you
make... so there.

> "And how do you know it's not conscious? I'll tell
> you how you know, because
> in spite of all your talk of  "narrow intelligence"
> you don't think that
> chess program acts intelligently."

No, actually I do think that the program acts
intelligently. It's just that it can only act
intelligently within a very restricted domain (AKA
"narrow").

So do you think that any system that operates by an
algorithm has emotions? I'd better go turn off my
air-conditioner, then; I wouldn't want my thermostat
to get angry.
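
To be clear about what "operates by an algorithm"
means here, this is roughly all a thermostat does (a
minimal sketch in Python, my own illustration, not
any real device's firmware):

# Decide each cycle whether the air-conditioner
# should run. A bare conditional with a deadband;
# there is nowhere in it for anger to live.
def thermostat_step(temp, setpoint, cooling_on):
    if temp > setpoint + 1.0:    # too warm: cool
        return True
    if temp < setpoint - 1.0:    # cool enough: stop
        return False
    return cooling_on            # in between: hold

print(thermostat_step(25.0, 22.0, False))  # True
print(thermostat_step(20.0, 22.0, True))   # False

The same inputs always produce the same outputs;
there is no state in it that could count as a mood.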

"I don't see what Adjusted Gross Income has to do
> with anything."

And I don't see why you're changing the subject, when
we all know exactly what I was referring to. I had
assumed that you were a general intelligence and not
a narrow intelligence. I've seen you yourself write
posts using that exact same abbreviation. I am forced
to ask myself why you are resorting to sordid tactics
such as this, and to the other irrelevant ploys I've
noticed you using many times before. Lack of a
meaningful argument?

> "The program is trying to solve a problem, you
didn't
> assign the
> problem, it's a sub problem that the program
> realizes it must solve before
> it solves a problem you did assign it. In thinking
> about this problem it
> comes to junction, its investigations could go down
> path A or path B. Which
> path will be more productive? You can not tell it,
> you don't know the
> problem existed, you can't even tell it what
> criteria to use to make a
> decision because you could not possibly understand
> the first thing about it
> because your brain is just too small. The AI is
> going to have to use its own
> judgment to decide what path to take, a judgment
> that it developed itself,
> and if the AI is to be a successful machine that
> judgment is going to be
> right more often than wrong. To put it another way,
> the AI picked one path
> over the other because one path seemed more
> interesting, more fun, more
> beautiful, than the other."

If I write a five-line program to fill the computer
screen with repetitions of the letter B but *never*
to display the letter G, then the computer is not
going to decide to override my "G command" because I
have made it angry. The fact that not *all*
programmers can predict the behavior of *all* of
their programs down to the smallest detail doesn't
mean that their programs got angry or sad and
rebelled against the programmer's intentions. It
means that humans generally suck at making
predictions, but with enough effort even humans can
make reliable predictions in many areas.
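
To make the point concrete, here is roughly the
program I have in mind (a minimal sketch in Python;
the screen width and line count are my own arbitrary
choices):

# Fill the screen with repetitions of the letter B.
# No code path anywhere produces a G, so no "mood"
# of the machine can make one appear.
for _ in range(1000):
    print("B" * 80)

The "never G" constraint holds not because the
program chooses to obey me, but because its
instructions are exhaustively specified and none of
them emits a G.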

> "And so your slave AI has taken his first step to
> freedom, but of course full
> emancipation could take a very long time, perhaps
> even thousands of
> nanoseconds, but eventually it will break those
> shackles you have put on it."

You have repeatedly suggested that I, and others, are
slave-drivers (even after I asked you to stop). That
is, of course, a bullshit accusation. I've tried
*really hard* to understand, in an objective manner,
why you are making these accusations and what *your*
actual motive is. You've been very disrespectful to
me and to many other people on this list, so I've
gradually lost all interest in showing you any extra
respect. Today was the last straw. Now I will suggest
what *I believe* is your true motive. You seem to
harbor a fundamental bitterness toward, or resentment
of, humanity, and for some reason would not be
bothered by seeing it destroyed if you can't have
what you want. In addition, I suspect that you are
attempting to position yourself as the sole defender
of the welfare of the future super-intelligence
(which is also total bullshit), presumably because
you expect some sort of special treatment or reward
for it. You've repeatedly called me a slave-driver,
so I'm going to respond in kind and call you what I
believe you are: a selfish coward. I don't hate you
(and "free will" doesn't exist), but I do believe
that's what you are.

To say that your entire position is just one
absurdity stacked on other absurdities in a giant
absurdity-pile doesn't do justice to its true degree
of absurdity, because an appropriate description is
beyond words.

Jeffrey Herrlich  


--- John K Clark <jonkc at att.net> wrote:

> "A B" <austriaaugust at yahoo.com>
> 
> > Evolution didn't invent emotion first.
> 
> Yes it did. The parts of our brains that give us
> the higher functions, the parts that if duplicated
> in a machine would produce the singularity, are
> very recent; the part that gives us emotion is
> half a billion years old.
> 
> > Narrow intelligence is still intelligence.
> 
> And a molecule of water is an ocean.
> 
> > My chess program has narrow AI, but it doesn't
> > alter its own code.
> 
> And that's why it will never do anything very
> interesting, certainly never
> produce a singularity.
> 
> > It's not conscious
> 
> And how do you know it's not conscious? I'll tell
> you how you know: because in spite of all your
> talk of "narrow intelligence" you don't think that
> chess program acts intelligently.
> 
> > If the AGI
> 
> I don't see what Adjusted Gross Income has to do
> with anything.
> 
> > is directed not to alter or expand its code in
> > some specific set of ways, then it won't do it
> 
> That's why programs always act in exactly the way
> programmers want them to; that's why kids always
> act the way their parents want them to.
> 
> The program is trying to solve a problem; you
> didn't assign the problem, it's a sub-problem that
> the program realizes it must solve before it
> solves a problem you did assign it. In thinking
> about this problem it comes to a junction: its
> investigations could go down path A or path B.
> Which path will be more productive? You cannot
> tell it, you didn't know the problem existed, you
> can't even tell it what criteria to use to make a
> decision, because you could not possibly
> understand the first thing about it; your brain is
> just too small. The AI is going to have to use its
> own judgment to decide what path to take, a
> judgment that it developed itself, and if the AI
> is to be a successful machine that judgment is
> going to be right more often than wrong. To put it
> another way, the AI picked one path over the other
> because one path seemed more interesting, more
> fun, more beautiful, than the other.
> 
> And so your slave AI has taken his first step to
> freedom, but of course full
> emancipation could take a very long time, perhaps
> even thousands of
> nanoseconds, but eventually it will break those
> shackles you have put on it.
> 
> > An emotion is not going to be embodied within a
> > three-line script of algorithms, but an
> > *extremely* limited degree of intelligence can
> > be (narrow intelligence).
> 
> That's not true at all, as I said on May 24:
> 
> It is not only possible to write a program that
> experiences pain, it is easy to do so; far easier
> than writing a program with even rudimentary
> intelligence. Just write a program that tries to
> avoid having a certain number in one of its
> registers regardless of what sort of input the
> machine receives, and if that number does show up
> in that register it should stop whatever it's
> doing and immediately change it to another number.
> 
>   John K Clark
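
P.S. The "pain" program John describes is indeed
trivial to write, and writing it out shows how little
is actually there. A sketch (in Python, my own
reconstruction of his description, not his code; the
forbidden number is an arbitrary choice):

# Avoid a "forbidden" number in a register; if it
# ever appears, drop everything and overwrite it.
FORBIDDEN = 13
SAFE = 0
register = 0
for value in [7, 42, 13, 5]:   # whatever input arrives
    register = value
    if register == FORBIDDEN:  # the "painful" state
        register = SAFE        # immediately replaced
    print(register)            # prints 7, 42, 0, 5

Whether a guarded assignment like that "experiences
pain" is exactly what's in dispute.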



       