[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Sat May 26 19:12:14 UTC 2007


John Clark writes

> Lee Corbin wrote:
>
>> Why would the machine have such considerations if they were not
>> explicitly or implicitly programmed in?
>
> Dear God, you're not going to bring up that old cliché that a computer can
> only do what it's programmed to do are you?

Hmm, don't know.  Clearly artificial machines do many things
never *explicitly* programmed into them, e.g., finding novel proofs.
But they do not engage in extremely focused behavior (e.g., writing
poetry) unless someone or something has crafted this into them,
or it evolved under selection pressure.

What I meant to say (and thought that
I did say) was that some of the things you suggest it will do, e.g.,
"find certain things stupid", or have emotions, or have certain
agendas, or experience boredom, are not the simple, rudimentary,
obvious things that we intuit them to be.  Each is an extremely finely
crafted kind of behavior, a behavior that will never simply arise
in the absence of a vicious evolutionary struggle.  But maybe you are
talking about an evolutionary struggle (see below)?

(I do admit that if the AIs you are talking about are indeed the result
of a protracted evolutionary struggle, then, yes, they'll have the same
kinds of agendas that we do, including self-survival, dominance, war,
everything, probably even including a form of sex.)

But if your Version 347812 and its direct descendant Version 347813
are just refining initial goals laid down way back, either by humans or
by the great Version 42, and are thereby NOT participating in an
all-against-all Hobbesian/Darwinian struggle, then there is no necessity
for them to have emotions, or for them ever to be bored, or to consider
something stupid, or to develop agendas of their own.

You also wrote, answering Stathis:

> > There is no *logical* reason why a computer should prefer to work on
> > non-contradictory propositions.
>
> The evolutionary principles that Darwin enumerated will not be repealed even
> for an electronic AI. If the AI has no desire to avoid getting into an
> infinite loop then it will be of no use to us, or itself, or anybody else...
> If it doesn't refuse to obey the order "prove or disprove the continuum
> hypothesis using conventional Zermelo-Fraenkel set theory", then it's
> infinite-loop time and all your AI does is consume energy and radiate heat.

I have found a way to agree with both you and Stathis  :-)

As Stathis implied, the AI could be set to work both on the implications
of CH and, separately, on the implications of ~CH.  In fact, many
mathematicians do that already.  (Since Gödel and Cohen showed that CH
is independent of ZF, an exhaustive search for a proof or disproof of it
can never terminate.)  But I agree with you that one of the early
innovations people and Version 42 will add is to never fall into
energy-wasting infinite loops.
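
To make that concrete, here is a minimal toy sketch of the guard I mean
(the axiom list, the successor_rule, and the budget number are all
invented stand-ins for illustration, not a real prover): cap the number
of derivation steps, so that a goal independent of the axioms makes the
search give up rather than loop forever.  The last two calls run the CH
and ~CH branches separately, per Stathis's suggestion.

from collections import deque

def bounded_search(axioms, goal, rules, budget=10000):
    """Breadth-first closure of the axioms under the inference rules,
    capped at `budget` derivation steps.  Returns 'proved' or gives up;
    by construction it can never fall into an infinite loop."""
    known = set(axioms)
    frontier = deque(axioms)
    steps = 0
    while frontier and steps < budget:
        fact = frontier.popleft()
        if fact == goal:
            return "proved"
        for rule in rules:
            for derived in rule(fact):
                steps += 1
                if derived not in known:
                    known.add(derived)
                    frontier.append(derived)
    return "gave up (budget exhausted)"

# A deliberately endless toy rule: from any fact s, derive s + "'".
def successor_rule(fact):
    yield fact + "'"

zf = ["0=0"]   # toy stand-in for the ZF axioms
print(bounded_search(zf, "CH", [successor_rule]))            # gave up
print(bounded_search(zf + ["CH"], "CH", [successor_rule]))   # proved
print(bounded_search(zf + ["~CH"], "~CH", [successor_rule])) # proved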

Lee



