[extropy-chat] Eugen Leitl on AI design

paul.bridger paul.bridger at paradise.net.nz
Thu Jun 3 05:15:20 UTC 2004


Unfortunately, an AI does not have to be actively malevolent to destroy 
humanity. If an AI were simply indifferent to us, we would still be in huge 
danger.

Ask yourself: do you consider that sandwich you are munching on to be a moral 
node? No, you don't. You consider it to be fuel.

You may argue that whereas your sandwich is of no value to you intact, humans 
can help an AI and so are valuable in human form. However, we're talking 
about a self-improving, singularity-style AI, which would quickly dwarf human 
capabilities and have no need for us to help it think.

AI Friendliness must be engineered, because simple indifference would turn us 
into lunch.

Zero Powers wrote:

> ----- Original Message ----- 
> From: "Eliezer Yudkowsky" <sentience at pobox.com>
> 
> 
>>Eugen Leitl wrote:
>>
>>
>>>On Wed, Jun 02, 2004 at 08:21:29AM -0400, Eliezer Yudkowsky wrote:
>>>
>>>
>>>>wondering why you think you can give hardware estimates for intelligence
>>>>when you claim not to know how it works.  I used to do that too, convert
>>>>synaptic spikes to floating-point ops and so on.  Later I looked back on my
>>>>calculations of human-equivalent hardware and saw complete gibberish,
>>>>blatantly invalid analogies such as Greek philosophers might have used for
>>>>lack of any grasp whatsoever on the domain.  People throw hardware at AI
>>>>because they have absolutely no clue how to solve it, like Egyptian
>>>>pharaohs using mummification for the cryonics problem.
>>>
>>>Many orders of magnitude more performance is a poor man's substitute for
>>>cleverness, by doing a rather thorough sampling of a lucky search space.
>>
>>Right.  But it automatically kills you.  Worse, you have to be clever to
>>realize this.  This represents an urgent problem for the human species, but
>>at least I am not personally walking directly into the whirling razor
>>blades, now that I know better.
> 
> 
> Eli
> 
> You seem pretty certain that, unless friendliness is designed into it from the
> beginning, the AI will default to malevolence.  Is that your thinking?  If
> so what do you base it on?  Is it a mathematical certainty kind of thing, or
> just a hunch?  Given our planet's history it makes sense to assume
> the world is cruel and out to get you, but I'm not so certain that default
> assumption should/would apply to an AI.
> 
> Why, you say?  Glad you asked.  Life as we know it is a game of organisms
> attempting to maximize their own fitness in a world of scarce
> resources.  Since there are never enough resources (food, money, property,
> what-have-you) to go around, the "kill or be killed" instinct is inherent in
> virtually all lifeforms.  That is obvious.
> 
> But would that necessarily be the case for an AI?  Certainly your
> AI would have no need for food, money, real estate or beautiful women.
> What resources would an AI crave?  Electrical power?  Computing power?
> Bandwidth?  Would those resources be best attained by destroying man, or by
> working with him (at best) or ignoring him (at worst)?  What would the AI
> gain by a _Terminator_ style assault on the human race?  I don't see it.
> 
> I guess what I'm asking is where would the interests of your AI conflict
> with humanity's interests such that we would have reason to fear being
> thrust into the "whirling razor blades?"
