[ExI] Unfriendly AI is a mistaken idea.
A B
austriaaugust at yahoo.com
Tue May 29 18:08:05 UTC 2007
John writes:
> "But the "something" tacking on emotion to an AI
> obviously can't be an AI,
> because then the AI would soon have emotion, and
> that just won't do for your
> friendly AI."
Why do you presume that a Friendly AI can't eventually
acquire (or perhaps even start with) emotions? It
seems that genuinely nice people have the full gamut
of emotional experience; they can empathize. Why can't
the AI eventually feel love toward humanity, and
sadness (if it so chooses) that humanity has
suffered so much for so long? If the AI is designed
with something along the lines of CEV (Coherent
Extrapolated Volition), it seems likely
to me that humanity would approve of the AI having a
wonderful, emotionally charged existence such as no
human has ever had the pleasure of experiencing.
"I've been asking this question for
> years on this list but never
> received an answer, if intelligent behavior is
> possible without emotion then
> why did Evolution invent emotion?"
Because emotion was another accidental mutation that
happened, incidentally, to have high survival
and reproductive value among certain social animals.
Replicators (and, I suspect, baseline intelligence)
existed long, long before emotion ever did.
Evolution is not a person; it doesn't have any goals
of its own. It's all about the numbers and the
physics.
How much *emotion* do you really believe a
garter snake has? None at all, or extremely little,
is my guess. But observation suggests that it has
adequate intelligence and consciousness to allow it
to survive.
> "And lets retire the term "Friendly AI" and call it
> for what it is, Slave AI."
Nah. We'll pass. Are all genuinely nice people slaves
to the rest of humanity? Not in their own opinion, I
suspect. They are nice because they *like* being nice.
> "If the only reason someone wants to live is so he
> can serve you then that is
> not a friend; that is a slave."
If all my girlfriend *wants* to do for a span of 30
minutes is cook dinner for me, should I reject her
because I won't accept a "slave"?
You don't seem to accept the fact that a Friendly AI
*won't be bothered* by helping humanity to transcend,
at which point we will be friends, equals and allies.
In all probability, the Friendly AI will in fact take
great pleasure in helping humanity get out of this
shit-hole condition. None of the Friendly AI people
are talking about making the AI suffer in any way;
quite the contrary. Have you considered that the very
reason you feel concern over the well-being of
the AI is the same reason the Friendly AI will be
concerned about the well-being of humanity? Because
you, like it, have a structured system of emotions and
ethics. In your case, the structure was designed by
blind evolution. In the AI's case, the structure will
be designed by the AI engineers. Why does that
difference matter? Aren't you glad that you do have a
designed system of ethics? If you didn't have one, you
wouldn't care at all about the well-being of the AI,
or about anything else for that matter.
Do you honestly expect that any non-suicidal AI
programmer would be willing to create an AI that he
knew for a fact would bring an end to himself and to
all that he loved? Humanity wouldn't pursue Strong AI
under those circumstances - humanity would never grant
the AI the privilege and joy of existence at all if
that were the case. And humanity would be entirely
justified in making that decision. Strong AI is a
sacred trust between humanity and its creation.
Friendly AI is a win-win situation for all parties
involved.
Best,
Jeffrey Herrlich