[ExI] Unfriendly AI is a mistaken idea.

A B austriaaugust at yahoo.com
Wed May 30 18:00:40 UTC 2007



From an evolutionary perspective, it seems almost
certain that consciousness evolved *before* emotion.
Of what survival or reproductive value would emotion
be if the organism had no consciousness and so could
not perceive it in any way? Consciousness must have
come first. And consciousness that is meaningful and
useful (useful for survival and reproduction)
requires a baseline intelligence. You might try to
claim that emotion and intelligence somehow evolved
simultaneously, but that seems astronomically
improbable. Out of how many thousands of random
mutations does a single one confer a significant
advantage? I don't know the number, but it must be
huge.

Do you claim that any system that embodies an
algorithm must have an "intelligence", and therefore
must have emotion?

If I make a splash in a bucket of water, an algorithm
(based on the laws of physics and the configuration of
the molecules) conveys the effect of the event
throughout all the water molecules. Are you saying
that the water felt an emotion when I made the splash?

And if emotion is the basis for consciousness and
intelligence, as you claim, then why does one have
peaks and valleys of emotional activity throughout
the day? Shouldn't consciousness be constantly
saturated by intense emotions in that case? I
actually tend to block emotions to some degree when
I focus intently on something. Not always, though,
as you may have noticed.

Do you "feel" yourself thinking? No. Consciousness and
emotion are more like end-products (conscious
experience being the final "product"), and intelligent
processing is more like the construction stage.

> "I do, Hugo de Garis said it."

What video are you referring to? "Building Gods"? If
so, I don't believe that was sponsored by SIAI.

> "Full speed ahead in making a AI, after all If we
> don't somebody else will,
> just don't waste time with friendly part you'll just
> be spinning your
> wheels. Flesh and blood human being as we now know
> them will be extinct in
> less than a century anyway, but if we're very lucky
> the AI may let us evolve
> into something better; if we make the first AI the
> odds of that happening
> are a little bit better than if somebody else does.
> And if the AI decides to
> kill us all at least we'll go out in style."

You may be surprised to hear that I actually agree
with you ... *sort of*. We do need to pursue AI
quickly, not only for existential-risk reasons but
because hundreds of thousands of people are dying
every day, and millions more continue to suffer. And
that's not even counting all the other animals. But
there are some things in this Universe that I and
many others love. Can you blame someone for wanting
to protect what they love? We need to pursue AI
vigorously, but we need to be as careful as we can
be at the same time. I don't believe that a
paperclip-AI equivalent is *likely* to come about,
but I do believe that it is a real, serious
possibility if we aren't careful. And you may not
believe it, for some reason that I can't identify
with, but I honestly believe that a Friendly AI
design will lead to the best outcome, not only for
humanity but also for the AI. And I'm going to
continue to do what I can to support SIAI.


Besides, if we are all shared software running on the
same system, how is anyone "disadvantaged"? Hell, we
might even decide to all converge into a single
"optimal" mind. I'd agree to it.

Sincerely,

Jeffrey Herrlich 

  



 