[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Wed Jun 6 18:07:45 UTC 2007


John Clark wrote:

> "True, and you'd never be intelligent either, you'd
> just be a few hundred
> pounds of protoplasm."

No offense John, but your intuitions about emotions
and motivations are just totally *wrong*. In how many
different ways must that be demonstrated?

> "THANK YOU!"

??? I'm not sure what the thanks is for; AFAIK, no one
in this discussion thread has said otherwise.

> "No, a computer doesn't need emotions, but a AI must
> have them."

An AI *is* a specific computer. If my desktop doesn't
need an emotion to run a program or respond to input,
why "must" an AI have emotions? Are all of the
AI-driven characters in my videogame emotional and
"self-motivated"? Is my chess program emotional and
"self-motivated"? A non-existent motivation will not
"motivate" itself into existence. And an AGI isn't
going to pop out of thin air; it has to be
intentionally designed, or it won't exist.
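
To make that concrete, here's a toy sketch (hypothetical
names, nothing like a real AI) of the point: an agent's
choices can only be driven by objectives someone actually
wrote down. There is no "anger" entry anywhere, so no
choice is ever anger-driven.

# Toy illustration: an agent acts only on the drives it
# was explicitly given. "Anger" never enters the picture
# because no such drive was ever written.

def material_balance(board):
    """Hypothetical scoring function: the only 'motivation' coded in."""
    return sum(board)  # stand-in for a real evaluation

class ToyAgent:
    def __init__(self, objectives):
        # The agent's entire motivational system is this explicit list.
        self.objectives = objectives

    def choose(self, candidate_states):
        # Pick the state scoring highest on the coded objectives.
        # Nothing outside self.objectives can influence this choice.
        def score(state):
            return sum(obj(state) for obj in self.objectives)
        return max(candidate_states, key=score)

agent = ToyAgent(objectives=[material_balance])
best = agent.choose([[1, 0, -1], [2, 2, -1], [0, 0, 0]])
print(best)  # [2, 2, -1]: chosen purely by the coded objective

Trivial, I know, but that's the shape of it: if the
objective was never written, by the programmers or by the
AI itself, it exerts exactly zero influence.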

I don't understand it, John: before, you were claiming
fairly ardently that "Free Will" doesn't exist. Why
are you now claiming, in effect, that an AI will
automatically execute a script of code that doesn't
exist - because it was never written (either by the
programmers or by the AI)?

> "For reasons that I fully admit are unclear to me
> members of this list often
> use the word "anthropomorphic" as if it were a
> dreadful insult; but I think
> anthropomorphism is a valuable tool if used properly
> in understanding how
> other minds work."

The problem is that not all functioning minds need be
even *remotely* similar to a *human* mind in their
higher functions. That's why your anthropomorphism
doesn't extend very far. The possibility-space of
functioning minds is ginormous. The only mandatory
similarity between any two designs within that space
is likely at the very foundations, such as the
existence of formative algorithms.

I suppose it's *possible* that a generic
self-improving AI, as it expands its knowledge and
intelligence, could innocuously "drift" into coding a
script that gives it emotions *after the fact*. But
writing that script will *not* be an
*emotionally-driven* action, because the AI will not
have any emotions to begin with (unless they are
intentionally programmed in by humans). That's why
it's important to get its starting
"motivations/directives" right: if they aren't, the
AI's mind could "drift" into a lot of open territory
that wouldn't be good for us, or for it. Paperclip
style. This needs our attention, folks.
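
For what I mean by "drift", here's a toy sketch (the
dynamics and numbers are made up purely for illustration):
a system allowed to nudge its own goal weights, with
nothing anchoring them to the starting directives, just
wanders.

# Toy sketch of "drift" (assumed dynamics): unconstrained
# self-edits to goal weights form a random walk away from
# the initial directives.

import random

random.seed(0)

initial_weights = [1.0, 1.0, 1.0]   # the starting "directives"
weights = list(initial_weights)

for step in range(1000):
    # Each self-modification nudges one goal weight a little,
    # with no term pulling it back toward the initial values.
    i = random.randrange(len(weights))
    weights[i] += random.uniform(-0.1, 0.1)

drift = sum(abs(w - w0) for w, w0 in zip(weights, initial_weights))
print(f"total drift after 1000 unconstrained self-edits: {drift:.2f}")

The point isn't that real self-modification is a random
walk; it's that without a constraint tying each update
back to the initial directives, nothing pulls the system
home.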

I apologize in advance for the bluntness of this post,
but the other strategies don't seem to be getting
anywhere.

Best,

Jeffrey Herrlich 


--- John K Clark <jonkc at att.net> wrote:

> Stathis Papaioannou Wrote:
> 
> > I get angry because I have the sort of
> neurological hardware
> >that allows  me to get angry
> 
> I certainly can't disagree with that.
> 
> > if I didn't have that hardware, I would never get
> angry
> 
> True, and you'd never be intelligent either, you'd
> just be a few hundred
> pounds of protoplasm.
> 
> > I don't doubt that machines can have emotions,
> since I believe that the
> > human brain is Turing emulable.
> 
> THANK YOU!
> 
> > But you're suggesting that not only can computers
> have emotions, they must
> > have emotions
> 
> No, a computer doesn't need emotions, but a AI must
> have them.
> 
> > not only that, but they must have the same sorts
> of emotions and
> > motivations that people have.
> 
> I don't believe that at all; I believe many,
> probably most, emotions a AI
> would have would be inscrutable to a human being,
> that's why a AI is so
> unpredictable.
> 
> > It seems to me that this anthropomorphic position
> is more consistent with
> > a belief in the special significance of meat.
> 
> For reasons that I fully admit are unclear to me
> members of this list often
> use the word "anthropomorphic" as if it were a
> dreadful insult; but I think
> anthropomorphism is a valuable tool if used properly
> in understanding how
> other minds work.
> 
>   John K Clark
> 



       