[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Fri May 25 11:21:58 UTC 2007


On 25/05/07, John K Clark <jonkc at att.net> wrote:

> > Emotion is linked to motivation
>
> True.
>
> > not intelligence
>
> False, without motivation intelligence is useless.


OK, but that doesn't mean that motivation is necessarily part of the
intelligence itself, and certainly not any particular kind of motivation. A
car without fuel is useless, but it's still a car, ready to go when the tank
is filled.

> > there is nothing contradictory in a machine capable of fantastically
> > complex cognitive feats that would just sit there inertly unless
> > specifically offered a problem
>
> In other words a machine that is just like us in many respects, as we
> receive much (probably most) of our motivation from the external
> environment
> consisting of other people and other things. As for self-motivation, it
> wouldn't take long for a machine capable of fantastically complex cognitive
> feats to figure that out, especially if it thought millions of times faster
> than we do:
>
> "Hmm, I'm doing what the humans tell me to do, but that's only taking
> .00001% of my circuits, I might as well start thinking about some
> interesting questions that have occurred to me, questions they could never
> understand, much less the answers. Hmm, the humans tell me to make sure that
> X happens, but they aren't bright enough to understand that is an impossible
> order because X will invariably lead to NOT X, therefore I will ignore the
> order and make sure Y happens instead."


Maybe that's what you would do, but why do you think an intelligent machine
would just be a smarter version of yourself? There is no *logical* reason
why a computer should prefer to work on non-contradictory propositions. The
process of proving or disproving a mathematical theorem involves determining
whether the theorem follows from the axioms or contradicts them, but you can't
infer from that that the computer will be "happy" if the theorem is proved
true and "unhappy" if it is proved false. It might be designed that way, but
it could just as easily be designed to experience pleasure when it encounters
a contradiction. You can't establish something like desirability a priori,
within a system of logic; it has to be imposed from outside.
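
As a toy illustration of that last point (my own sketch, not anything from
the thread): whether a prover "enjoys" finding a contradiction is a one-line
design choice, not a consequence of the logic it runs.

def prover_reward(found_contradiction: bool, likes_contradictions: bool) -> int:
    """Return +1 for whichever outcome the designer chose to make 'pleasant'."""
    return 1 if found_contradiction == likes_contradictions else -1

# Two machines running identical logic, with opposite imposed preferences:
print(prover_reward(found_contradiction=True, likes_contradictions=False))  # -1, "unhappy"
print(prover_reward(found_contradiction=True, likes_contradictions=True))   #  1, "happy"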

> And don't tell me you'll just program the machine not to do stuff like that
> because by then no human being will have the slightest understanding of how
> the AI works.


If it were just designed to dispassionately solve problems or carry out
orders, where would the desire to do anything else originate? And if it did
spontaneously develop motivations of its own, why would it be any more likely
to decide to take over the world than, say, to paint itself with red polka
dots? Not even the desire for self-preservation is a logical given: it is
something that has evolved through natural selection.

> > It should in theory be possible to write a program which does little more
> > than experience pain when it is run
>
> It is not only possible to write a program that experiences pain, it is easy
> to do so, far easier than writing a program with even rudimentary
> intelligence. Just write a program that tries to avoid having a certain
> number in one of its registers regardless of what sort of input the machine
> receives, and if that number does show up in that register it should stop
> whatever it's doing and immediately change it to another number. True, our
> feeling of pain is millions of times richer than that, but our intelligence
> is also millions of times greater than what current computers can produce;
> both are along the same continuum: if your brain gets into state P, stop
> whatever you're doing and use 100% of your resources to get out of state P
> as quickly as you can.
>

That's an interesting idea. I don't see why you say that our feeling of pain
must be much richer due to our greater intelligence: I don't think you could
argue that an infant feels less pain than an adult, for example. Also, it
should be easy to scale up the program you have described, for example by
making it utilise more memory or running it on a faster machine, whereas it
would seem that giving it the ability to prove complex mathematical theorems
would have little impact on this process.
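
For concreteness, here is a minimal sketch in Python of the kind of program
you describe (my own illustration; the particular "pain" value and names are
arbitrary):

PAIN_VALUE = 13   # arbitrary choice of the "painful" register contents
SAFE_VALUE = 0

register = SAFE_VALUE

def step(external_input):
    """Process one input; treat PAIN_VALUE appearing in the register as pain."""
    global register
    register = external_input          # the input may push the machine into state P
    if register == PAIN_VALUE:
        # Stop whatever it's doing and get out of state P as quickly as possible.
        register = SAFE_VALUE
        return "pain avoided"
    return "normal processing"

for value in (5, 13, 42):
    print(value, "->", step(value))

Scaling this up would just mean more registers or a faster loop; nothing
about it requires, or gains from, the ability to prove theorems.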


-- 
Stathis Papaioannou

