[extropy-chat] Eugen Leitl on AI design

Zero Powers zero_powers at hotmail.com
Thu Jun 3 04:54:32 UTC 2004


----- Original Message ----- 
From: "Eliezer Yudkowsky" <sentience at pobox.com>


> Eugen Leitl wrote:
>
> > On Wed, Jun 02, 2004 at 08:21:29AM -0400, Eliezer Yudkowsky wrote:
> >
> >>wondering why you think you can give hardware estimates for intelligence
> >>when you claim not to know how it works.  I used to do that too, convert
> >>synaptic spikes to floating-point ops and so on.  Later I looked back on my
> >>calculations of human-equivalent hardware and saw complete gibberish,
> >>blatantly invalid analogies such as Greek philosophers might have used for
> >>lack of any grasp whatsoever on the domain.  People throw hardware at AI
> >>because they have absolutely no clue how to solve it, like Egyptian
> >>pharaohs using mummification for the cryonics problem.
> >
> > Many orders of magnitude more performance is a poor man's substitute for
> > cleverness, by doing a rather thorough sampling of a lucky search space.
>
> Right.  But it automatically kills you.  Worse, you have to be clever to
> realize this.  This represents an urgent problem for the human species, but
> at least I am not personally walking directly into the whirling razor
> blades, now that I know better.

Eli

You seem pretty certain that, unless friendliness is designed into it from the
beginning, the AI will default to malevolence.  Is that your thinking?  If
so what do you base it on?  Is it a mathematical certainty kind of thing, or
just a hunch?  Given our planet's history, it makes sense to assume
the world is cruel and out to get you, but I'm not so certain that default
assumption should or would apply to an AI.

Why, you say?  Glad you asked.  Life as we know it is a game of organisms
attempting to maximize their own fitness in a world of scarce
resources.  Since there are never enough resources (food, money, property,
what-have-you) to go around, the "kill or be killed" instinct is inherent in
virtually all lifeforms.  That is obvious.

But would that necessarily be the case for an AI?  Certainly your
AI would have no need for food, money, real estate or beautiful women.
What resources would an AI crave?  Electrical power?  Computing power?
Bandwidth?  Would those resources be best attained by destroying man, by
working with him (at best), or by ignoring him (at worst)?  What would the AI
gain by a _Terminator_ style assault on the human race?  I don't see it.

I guess what I'm asking is where would the interests of your AI conflict
with humanity's interests such that we would have reason to fear being
thrust into the "whirling razor blades"?
