[ExI] Unfriendly AI is a mistaken idea.

Eugen Leitl eugen at leitl.org
Tue Jun 12 11:19:57 UTC 2007


On Tue, Jun 12, 2007 at 08:32:35PM +1000, Stathis Papaioannou wrote:

>    Humans do extremely complex and dangerous things, such as build and
>    run nuclear power plants, where just one thing going wrong might lead
>    to disaster. The level of precautions taken has to be consistent with
>    the probability of something going wrong and the negative consequences
>    should that probability be realised. If there is even a small
>    probability of destroying the Earth then maybe that line of endeavour
>    is one that should be avoided.

See, you're doing it again: "...should be avoided". How about
"...making money should be avoided", or "...breathing should be
avoided"? Strictly no violations allowed.
 
>    Don't do anything unless it is specifically requested. Stop doing

That assumes I'm going to listen, be truthful, or accurate, or that you'd
care to do the inverse kinematics in your head so that the manipulator
won't poke you in the eye by mistake.

>    whatever it is doing when that is specifically requested. Spell out

What if you don't understand what the system is doing, don't understand
the implications, or the system is simply not going to stop?

>    the expected consequences of everything it is asked to do, together
>    with probabilities, and update the probabilities at each point when a
>    decision that affects the outcome is made, or more frequently as

That's not bad, assuming you care, you understand it, and it's actually
going to comply, and be truthful and accurate.
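The quoted proposal — state the expected outcomes with probabilities, then revise them at each decision point — can be sketched as a plain Bayesian update. A minimal sketch; the outcome labels, prior numbers, and likelihoods below are all hypothetical, not anything from the original post:

```python
# Minimal sketch of the quoted scheme: the system reports outcome
# probabilities up front and revises them as evidence arrives at each
# decision point. All outcomes and numbers here are made-up placeholders.

def bayes_update(priors, likelihoods):
    """Return P(outcome | evidence) given priors and P(evidence | outcome)."""
    unnorm = {o: priors[o] * likelihoods[o] for o in priors}
    total = sum(unnorm.values())
    return {o: p / total for o, p in unnorm.items()}

# Before the task starts: the system's stated expectations.
priors = {"success": 0.90, "minor_fault": 0.09, "disaster": 0.01}

# At a decision point, some evidence (say, a failed self-test) is observed;
# these are how likely that evidence is under each outcome.
likelihoods = {"success": 0.10, "minor_fault": 0.60, "disaster": 0.90}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # the disaster probability rises sharply
```

Of course, the sketch only illustrates the bookkeeping — it says nothing about whether the system's reported priors and likelihoods are truthful or accurate, which is exactly the objection above.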

>    directed. The person it is taking directions from is an appropriately
>    identified human or another AI, ultimately responsible to a human up

What is a human? How do you identify something as a human? What about
a human that explicitly tells me to build a system that is not subject
to any of the above restrictions? How about a human that builds that
system quite directly, and is done sooner than you with your brittle
Rube Goldberg device?

>    the chain of command.

Top-down never works.

>    If you call a plumber to unblock your drain, you want him to be an
>    expert at plumbing, to be able to understand your problem, to present

If I want a system to clothe, feed and entertain a family, and
not be bothered with implementation details, would that work, long-term?

>    to you the various choices available in terms of their respective
>    merits and demerits, to take instructions from you (including the
>    instruction "just unblock it however you think is best", if that's
>    what you say), to then carry the task out in as skilful a way as
>    possible, to pause halfway if you ask him to for some reason, and to
>    be polite and considerate towards you at all times. You don't want him

You understand plumbing. Do you understand high-energy physics,
orbital mechanics, machine-phase chemistry, toxicology, and nonlinear 
system dynamics? The system is sure going to have a bit of 'splaining to do.
It's sure nice to have a wide range of choices, especially if one
doesn't understand a single thing about any of them.

>    to be driven by greed, or distracted because he thinks he's too smart
>    to be fixing your drains, or to do a shoddy job and pretend it's OK so
>    that he gets paid. A human plumber will pretend to have the qualities
>    of the ideal plumber, but of course we know that there will be the
>    competing interests at play. Do believe that an AI smart enough to be
>    a plumber would *have* to have all these other competing interests? In

I believe nobody who can go on two legs can make a system which 
is such an ideal plumber.

>    other words that emotions such as pride, anger, greed etc. would arise
>    naturally out of a program at least as competent as a human at any
>    given task?

How do you write a program as competent as a human? One line at a time, sure.
All 10^17 of them.
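For scale, taking the 10^17-line figure above at face value, and assuming (generously, and purely for illustration) a sustained output of 100 delivered lines per programmer per day:

```python
# Back-of-envelope scale check on the 10^17-line figure.
# The productivity rate is an assumption, not data.
lines = 10**17
lines_per_day = 100           # assumed: generous sustained programmer output
days_per_year = 365
programmer_years = lines / (lines_per_day * days_per_year)
print(f"{programmer_years:.2e} programmer-years")  # roughly 2.7e+12
```

That is on the order of trillions of programmer-years — which is the rhetorical point: nobody writes such a system one line at a time.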


