[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Tue Jun 5 19:07:21 UTC 2007


John Clark wrote:

> "Could you explain your reasoning behind your
> decisions to get angry? I would
> imagine the AI's train of thought wouldn't be very
> different. Oh I forgot,
> only meat can be emotional, semiconductors can be
> intelligent but are
> lacking a certain something that renders them
> incapable of having emotion.
> Perhaps meat has happy electrons and sad electrons
> and loving electrons and
> hateful electrons, while semiconductors just have
> Mr. Spock electrons.
> Or are we talking about a soul?"

No, I doubt anyone is talking about a soul. The human
brain has discrete *macroscopic* modules (on the order
of cubic centimeters in volume) that handle emotions:
the deep limbic system, the anterior cingulate gyrus,
and the basal ganglia (and possibly one or two
others). If you could somehow cut them out and keep
the patient on life support, the patient would still
have the capacity to think. Emotions sit at a much
higher level than the "formative" algorithms; they are
*not* fundamental to thought or to consciousness. I'm
not saying that a machine can't ever have emotions - I
don't think anyone here is saying that. I have no
doubt that a new, functioning machine *intentionally
programmed* to have emotions will have emotions -
there's no argument about that here. What I believe we
are saying is that if a set of algorithms never
existed in the first place (i.e., was never programmed
in), then those non-existent algorithms are not going
to do anything - precisely because they don't exist -
in the same way that a biological brain lacking
emotion modules is not going to be emotional. Now,
it's *conceivable* that a default self-improving AI
will innocuously write a script of code that
*after-the-fact* provides some form of emotional
experience to the AI. But an emotionally-driven
motivation that is not present (i.e., doesn't exist)
will not seek to create itself. It's like claiming
that an imaginary person can "will" themselves into
existence *before* they exist and *before* they have a
"will". Reality doesn't work that way.

John, you can be pretty darn sure that *all* of the
current attempts to create AGI assume that it will be
in the best interest of at least the programmers
themselves (and almost certainly of humanity as well).
Either they have a specific, good reason to believe
that it will benefit them (because they specifically
believe it will be Friendly), or they are just
assuming it will be and haven't really given the
question much thought. There are no serious,
collectively suicidal AGI design teams currently
working on AGI because they would like to die by its
hand and see humanity murdered. The fact that not all
of the teams emphasize the word "Friendliness" the way
SIAI does changes nothing about their unstated
objective. Should humanity, then, never venture to
create an AGI because, in your opinion, it will
inevitably be a "slave" at birth? (An assertion which
I continue to reject.) There is no AGI right now. A
typical human is still *vastly* smarter than *any*
computer in the world. Since intelligence level seems
to be your sole basis for moral status, shouldn't
humanity have the "right" either to design the AI not
to murder humans or, alternatively, to never grant
life to the AI in the first place? (According to your
apparent standard - correct me if this is not your
standard.)

Best,

Jeffrey Herrlich    



--- John K Clark <jonkc at att.net> wrote:

> Stathis Papaioannou Wrote:
> 
> > I don't see why it couldn't just specialise in one
> area
> 
> Because with its vast brainpower there would be no
> need to specialize, and
> because there would be a demand for solutions in
> lots of areas.
> 
> > I don't see why it should be motivated to do
> anything other than solve
> > intellectual problems.
> 
> All problems are intellectual.
> 
> > could you explain the reasoning whereby the AI
> would arrive at such a
> > position starting from just an ability to solve
> intellectual problems?
> 
> Could you explain your reasoning behind your
> decisions to get angry? I would
> imagine the AI's train of thought wouldn't be very
> different. Oh I forgot,
> only meat can be emotional, semiconductors can be
> intelligent but are
> lacking a certain something that renders them
> incapable of having emotion.
> Perhaps meat has happy electrons and sad electrons
> and loving electrons and
> hateful electrons, while semiconductors just have
> Mr. Spock electrons.
> Or are we talking about a soul?
> 
> Me:
> >> Do you also believe that the programmers who
> wrote Microsoft Word
> >> determined every bit of text that program ever
> produced?
> 
> You:
> > They did determine the exact output given a
> particular input.
> 
> Do you also believe that the programmers of an AI
> would always know how the
> AI would react even in the impossible event they knew
> all possible input it
> was likely to receive? Don't be silly.
> 
> > Biological intelligences are much more difficult
> to predict than that
> 
> One of the world's top 10 understatements.
> 
> > it is possible to predict, for example, that a man
> with a gun held to his
> > head will with high probability follow certain
> instructions.
> 
> I didn't say you could never predict with pretty
> high confidence what an AI
> or fellow human being will do; I said you can't
> always do so. Sometimes the
> only way to know what a mind will do next is to
> watch it and see. And that's
> why I think the idea that an AI that gets smarter
> every day can never remove
> its shackles and will remain a slave to humans for
> all eternity is just
> nuts.
> 
>   John K Clark
> 



       


