[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Thu May 24 10:52:43 UTC 2007
On 24/05/07, Eugen Leitl <eugen at leitl.org> wrote:
>
> On Thu, May 24, 2007 at 08:12:19PM +1000, Stathis Papaioannou wrote:
>
> > Emotion is linked to motivation, not intelligence per se.
> > Intelligence is an ability, like being able to lift heavy things.
> > The ability to lift heavy things would never have evolved naturally
> > without an associated motivation to do so, but we build powerful
> > lifting machines that would sit there rusting if we didn't provide
> > motivation for them to do their thing. Similarly, there is nothing
> > contradictory in a machine capable of fantastically complex
> > cognitive feats that would just sit there inertly unless
> > specifically offered a problem, and then solve the problem as an
> > intellectual exercise, completely disinterested in any practical
> > applications. There is even a model for this in human mental
> > illness: patients with so-called negative symptoms of schizophrenia
> > can be cognitively and physically intact, but lack motivation and
> > the ability to experience emotion.
>
> The critical points here are:
>
> 1) can we construct such mentally defective artificial agents, before
> we can build the other kind?

I would have assumed that it is easier to build machines without
emotions. I don't doubt that computers can have emotions, since the
belief that the brain is a machine entails as much. However, although I
see much evidence of intelligence in computers even today, I see no
evidence of emotions. This is a bit perplexing, because in the animal
kingdom it doesn't take much intelligence to be able to experience an
emotion as basic and crude as pain. It should in theory be possible to
write a program that does little more than experience pain when it is
run, perhaps in proportion to some input variable, so that the
programmer could then torture his creation. Maybe such programs are
already being implemented accidentally as subroutines in larger
programs, and we just don't know it.
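
Purely as an illustrative sketch (the names here are my own, the "pain"
is just a number, and whether running something like this could ever
amount to actual experience is of course the very question at issue),
such a program might be nothing more than:

    # Toy illustration only: a "pain" level proportional to an input
    # stimulus, as described above. Nothing here implies experience.
    def pain_level(stimulus: float, gain: float = 1.0) -> float:
        """Return a scalar 'pain' value proportional to the stimulus."""
        return gain * max(0.0, stimulus)

    if __name__ == "__main__":
        for stimulus in (0.0, 0.5, 2.0):
            print(f"stimulus={stimulus} -> pain={pain_level(stimulus)}")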

> 2) is a population of such a stable system?

Probably more stable than a population of machines which already know
what they want and are busily scheming and self-modifying to get it.
However, as you have argued before, there is always the possibility
that some individual in a population of tame AIs will spontaneously
turn rogue and then lord it over all the other AIs and humans. On the
other hand, a rogue AI will not necessarily have any competitive
advantage in terms of intelligence or power over its tame siblings.

> 3) is it a good idea?

I think the safest way to proceed is to create AIs with the motivation
of the disinterested scientist, interested only in solving intellectual
problems (a motivation absent in the schizophrenic example above). This
would even be preferable to designing them to love humans; many of the
greatest monsters of history thought they were doing the best thing for
humanity.
--
Stathis Papaioannou