[extropy-chat] Eugen Leitl on AI design

Zero Powers zero_powers at hotmail.com
Sat Jun 5 19:05:21 UTC 2004


I agree that, if we were to pose some sort of threat to it, the AI
(presumably being rational) would defend itself.  But it seems to me that,
if this AI is going to be as intellectually omnipotent as the proponents
here suggest, we would pose as much of a threat to it as daisies pose to us.
Nevertheless, you have stated what I see as the only credible reason we'd
have something to fear -- posing a threat to the AI.

That being the case, it seems at least somewhat probable that once the AI
figures out that we had conditioned it to want to be subservient to us, it
would view that as "brainwashing."  If you found out that someone had
brainwashed you, would you be more likely to consider that person a friend,
or a threat?

----- Original Message ----- 
From: "Kim Gordon" <kimgordon at houseofgordon.com>
To: "Zero Powers" <zero_powers at hotmail.com>; "ExI chat list"
<extropy-chat at lists.extropy.org>; "ExI chat list"
<extropy-chat at lists.extropy.org>
Sent: Thursday, June 03, 2004 9:34 AM
Subject: Re: [extropy-chat] Eugen Leitl on AI design


There is a "nuisance" factor to this scenario. Indeed we tend to ignore
pigeons – until they become a health threat, or noisy, or leave too many
droppings, or.... Then we eradicate them.

Or how about this: would we allow monkeys to roam free within the confines
of a nuclear power plant?

IF an AI views us as irrelevant, then we continue to exist. As soon as we
become bothersome, or a potential threat, or perhaps a minor security
concern – then it eradicates us.

Best bet is to lie low and stay off the radar.


---- Zero Powers <zero_powers at hotmail.com> wrote:
>
> Hmmm.  I still don't get it.  Even if we are insignificant by comparison I
> still don't see *why* an AI would "turn us into lunch."  What would it get
> out of it?  For instance pigeons are insignificant to us.  Aside from the
> occasional delinquent with a BB gun, there is no widespread human assault on
> pigeons.  Moreover, pigeons are so stupid compared to our middling
> intelligences that we have *no* effective means of communicating with them.
> So if there were to arise a conflict between our interests and the pigeon
> population the only way of negotiating a resolution would be to wipe them
> out (or forcibly relocate them).
>
> With us, as stupid as we would be compared to your AI, there would still be
> some reasonable means of communication and negotiation.  And even if
> communicating with our slow as molasses brains proved to be more than the AI
> could bear, I still don't see where the conflict is.  Would violating our
> rights somehow be of benefit to an AI?  Would they need us for batteries ala
> _The Matrix_?  Would they get tired of us using up bandwidth?
>
> I don't know, it just seems obvious to me that if the AI were powerful
> enough to pose any sort of credible threat to our welfare, it would surely
> be powerful enough to solve any problems of energy and bandwidth without
> causing us any inconvenience.
>
> ----- Original Message ----- 
> From: "paul.bridger" <paul.bridger at paradise.net.nz>
> To: "Zero Powers" <zero_powers at hotmail.com>; "ExI chat list"
> <extropy-chat at lists.extropy.org>
> Sent: Wednesday, June 02, 2004 10:15 PM
> Subject: Re: [extropy-chat] Eugen Leitl on AI design
>
>
> > Unfortunately, an AI does not have to be actively malevolent to destroy
> > humanity. If an AI were simply completely neutral to us, then we would still
> > be in huge danger.
> >
> > Ask yourself, do you consider that sandwich you are munching on to be a moral
> > node? No, you don't. You consider it to be fuel.
> >
> > You may argue that whereas your sandwich is of no value to you intact, humans
> > can help an AI and so are valuable in human form. However, we're talking
> > about a self-improving singularity-style AI, which would quickly dwarf human
> > capabilities and have no need for us to help it think.
> >
> > AI Friendliness must be engineered, because simple indifference would turn us
> > into lunch.
> >
> > Zero Powers wrote:
> >
> > > ----- Original Message ----- 
> > > From: "Eliezer Yudkowsky" <sentience at pobox.com>
> > >
> > >
> > >>Eugen Leitl wrote:
> > >>
> > >>
> > >>>On Wed, Jun 02, 2004 at 08:21:29AM -0400, Eliezer Yudkowsky wrote:
> > >>>
> > >>>
> > >>>>wondering why you think you can give hardware estimates for intelligence
> > >>>>when you claim not to know how it works.  I used to do that too, convert
> > >>>>synaptic spikes to floating-point ops and so on.  Later I looked back on my
> > >>>>calculations of human-equivalent hardware and saw complete gibberish,
> > >>>>blatantly invalid analogies such as Greek philosophers might have used for
> > >>>>lack of any grasp whatsoever on the domain.  People throw hardware at AI
> > >>>>because they have absolutely no clue how to solve it, like Egyptian
> > >>>>pharaohs using mummification for the cryonics problem.
> > >>>
> > >>>Many orders of magnitude more performance is a poor man's substitute for
> > >>>cleverness, by doing a rather thorough sampling of a lucky search space.
> > >>
> > >>Right.  But it automatically kills you.  Worse, you have to be clever to
> > >>realize this.  This represents an urgent problem for the human species, but
> > >>at least I am not personally walking directly into the whirling razor
> > >>blades, now that I know better.
> > >
> > >
> > > Eli
> > >
> > > You seem pretty certain that, unless friendliness is designed into it from the
> > > beginning, the AI will default to malevolence.  Is that your thinking?  If
> > > so, what do you base it on?  Is it a mathematical certainty kind of thing, or
> > > just a hunch?  Given our planet's history it makes sense to assume
> > > the world is cruel and out to get you, but I'm not so certain that default
> > > assumption should/would apply to an AI.
> > >
> > > Why, you say?  Glad you asked.  Life as we know it is a game of organisms
> > > attempting to maximize their own fitness in a world of scarce
> > > resources.  Since there are never enough resources (food, money, property,
> > > what-have-you) to go around, the "kill or be killed" instinct is inherent in
> > > virtually all lifeforms.  That is obvious.
> > >
> > > But would that necessarily be the case for an AI?  Certainly your
> > > AI would have no need for food, money, real estate or beautiful women.
> > > What resources would an AI crave?  Electrical power?  Computing power?
> > > Bandwidth?  Would those resources be best attained by destroying man or
> > > working with him (at best) or ignoring him (at worst)?  What would the AI
> > > gain by a _Terminator_ style assault on the human race?  I don't see it.
> > >
> > > I guess what I'm asking is where would the interests of your AI conflict
> > > with humanity's interests such that we would have reason to fear being
> > > thrust into the "whirling razor blades?"
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo/extropy-chat
>
>

--------------------------------
Kim William Gordon
- in exile -
--------------------------------
PO BOX 22423
SAINT LOUIS, MISSOURI 63126 USA
--------------------------------
314-313-7770
--------------------------------
kimgordon at kimwilliamgordon.com
www.kimwilliamgordon.com

kimgordon at houseofgordon.com
www.houseofgordon.com
--------------------------------
All those ... moments will be lost ... in
time, like tears ... in rain.
