[extropy-chat] Eugen Leitl on AI design

Zero Powers zero_powers at hotmail.com
Thu Jun 3 16:07:49 UTC 2004


Hmmm.  I still don't get it.  Even if we are insignificant by comparison, I
still don't see *why* an AI would "turn us into lunch."  What would it get
out of it?  For instance, pigeons are insignificant to us.  Aside from the
occasional delinquent with a BB gun, there is no widespread human assault on
pigeons.  Moreover, pigeons are so stupid compared to our middling
intelligences that we have *no* effective means of communicating with them.
So if a conflict were to arise between our interests and the pigeon
population, the only way to resolve it would be to wipe them out (or
forcibly relocate them).

With us, as stupid as we would be compared to your AI, there would still be
some reasonable means of communication and negotiation.  And even if
communicating with our slow-as-molasses brains proved to be more than the AI
could bear, I still don't see where the conflict is.  Would violating our
rights somehow benefit an AI?  Would it need us for batteries a la
_The Matrix_?  Would it get tired of us using up bandwidth?

I don't know, it just seems obvious to me that if the AI were powerful
enough to pose any sort of credible threat to our welfare, it would surely
be powerful enough to solve any problems of energy and bandwidth without
causing us any inconvenience.

----- Original Message ----- 
From: "paul.bridger" <paul.bridger at paradise.net.nz>
To: "Zero Powers" <zero_powers at hotmail.com>; "ExI chat list"
<extropy-chat at lists.extropy.org>
Sent: Wednesday, June 02, 2004 10:15 PM
Subject: Re: [extropy-chat] Eugen Leitl on AI design


> Unfortunately, an AI does not have to be actively malevolent to destroy
> humanity. If an AI were simply completely neutral to us, then we would
> still be in huge danger.
>
> Ask yourself, do you consider that sandwich you are munching on to be a
> moral node? No, you don't. You consider it to be fuel.
>
> You may argue that whereas your sandwich is of no value to you intact,
> humans can help an AI and so are valuable in human form. However, we're
> talking about a self-improving singularity-style AI, which would quickly
> dwarf human capabilities and have no need for us to help it think.
>
> AI Friendliness must be engineered, because simple indifference would turn
> us into lunch.
>
> Zero Powers wrote:
>
> > ----- Original Message ----- 
> > From: "Eliezer Yudkowsky" <sentience at pobox.com>
> >
> >
> >>Eugen Leitl wrote:
> >>
> >>
> >>>On Wed, Jun 02, 2004 at 08:21:29AM -0400, Eliezer Yudkowsky wrote:
> >>>
> >>>
> >>>>wondering why you think you can give hardware estimates for intelligence
> >>>>when you claim not to know how it works.  I used to do that too, convert
> >>>>synaptic spikes to floating-point ops and so on.  Later I looked back on
> >>>>my calculations of human-equivalent hardware and saw complete gibberish,
> >>>>blatantly invalid analogies such as Greek philosophers might have used
> >>>>for lack of any grasp whatsoever on the domain.  People throw hardware
> >>>>at AI because they have absolutely no clue how to solve it, like
> >>>>Egyptian pharaohs using mummification for the cryonics problem.
> >>>
> >>>Many orders of magnitude more performance is a poor man's substitute for
> >>>cleverness, by doing a rather thorough sampling of a lucky search space.
> >>
> >>Right.  But it automatically kills you.  Worse, you have to be clever to
> >>realize this.  This represents an urgent problem for the human species,
> >>but at least I am not personally walking directly into the whirling razor
> >>blades, now that I know better.
> >
> >
> > Eli
> >
> > You seem pretty certain that, unless friendliness is designed into it
> > from the beginning, the AI will default to malevolence.  Is that your
> > thinking?  If so, what do you base it on?  Is it a mathematical-certainty
> > kind of thing, or just a hunch?  Given our planet's history, it makes
> > sense to assume the world is cruel and out to get you, but I'm not so
> > certain that default assumption should/would apply to an AI.
> >
> > Why, you say?  Glad you asked.  Life as we know it is a game of organisms
> > attempting to maximize their own fitness in a world of scarce resources.
> > Since there are never enough resources (food, money, property,
> > what-have-you) to go around, the "kill or be killed" instinct is inherent
> > in virtually all lifeforms.  That is obvious.
> >
> > But would that necessarily be the case for an AI?  Certainly your
> > AI would have no need for food, money, real estate or beautiful women.
> > What resources would an AI crave?  Electrical power?  Computing power?
> > Bandwidth?  Would those resources be best attained by destroying man, by
> > working with him (at best), or by ignoring him (at worst)?  What would
> > the AI gain by a _Terminator_-style assault on the human race?  I don't
> > see it.
> >
> > I guess what I'm asking is: where would the interests of your AI conflict
> > with humanity's interests such that we would have reason to fear being
> > thrust into the "whirling razor blades"?


