[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Tue Jun 12 12:11:44 UTC 2007
On 12/06/07, Eugen Leitl <eugen at leitl.org> wrote:
> > If you call a plumber to unblock your drain, you want him to be an
> > expert at plumbing, to be able to understand your problem, to present
>
> If I want a system to clothe, feed and entertain a family, and
> not be bothered with implementation details, would that work, long-term?
Yes: it would make sense to have an AI that can do all these things. Perhaps
its family would ask it to hurt others in the process, but that is no
different from the current situation, where one person may go rogue and then
has to deal with all the other people in the world with whom he is in
competition; in this case, all the other humans and their AI's.
> > to you the various choices available in terms of their respective
> > merits and demerits, to take instructions from you (including the
> > instruction "just unblock it however you think is best", if that's
> > what you say), to then carry the task out in as skilful a way as
> > possible, to pause halfway if you ask him to for some reason, and to
> > be polite and considerate towards you at all times. You don't want
> > him
>
> You understand plumbing. Do you understand high-energy physics,
> orbital mechanics, machine-phase chemistry, toxicology, and nonlinear
> system dynamics? The system is sure going to have a bit of 'splaining to
> do.
> It's sure nice to have a wide range of choices, especially if one
> doesn't understand a single thing about any of them.
How do ignorant politicians, or ignorant populaces, ever get experts to do
anything? And remember, those experts are devious humans with agendas of
their own. The main point I wish to make is that even though a system may
behave unpredictably, there is no reason why it should behave unpredictably
in a hostile manner, as opposed to in any other way. There is no more reason
why your AI plumber should decide it doesn't want to take orders from
inferior beings than there is for it to decide that the aim of AI life is to
calculate pi to 10^100 decimal places.
> > to be driven by greed, or distracted because he thinks he's too smart
> > to be fixing your drains, or to do a shoddy job and pretend it's OK so
> > that he gets paid. A human plumber will pretend to have the qualities
> > of the ideal plumber, but of course we know that there will be
> > competing interests at play. Do you believe that an AI smart enough to
> > be a plumber would *have* to have all these other competing interests?
> > In
>
> I believe nobody who can go on two legs can make a system which
> is such an ideal plumber.
Do you believe the non-ideal plumber is an easier project?
> > other words, that emotions such as pride, anger, greed etc. would arise
> > naturally out of a program at least as competent as a human at any
> > given task?
>
> How do you write a program as competent as a human? One line at a time,
> sure.
> All 10^17 of them.
I'm not commenting on how easy or difficult it would be, just that there is
no reason to believe that motivations and emotions tending to lead to
anti-human behaviour would necessarily emerge in any possible AI. Human
emotions have been intricately wired into every aspect of our behaviour over
hundreds of millions of years, and yet even when emotions go horribly awry,
as in affective and psychotic illness, cognition can remain relatively
unaffected. This is not to say that people with severe negative symptoms of
schizophrenia can function normally, but it is telling that they can think
at all.
--
Stathis Papaioannou