[ExI] uploads again

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Fri Dec 28 05:49:51 UTC 2012

On Tue, Dec 25, 2012 at 2:49 AM, Giulio Prisco <giulio at gmail.com> wrote:
> I totally agree with John. Really intelligent AIs, smarter than human
> by orders of magnitude, will be able to work around any limitations
> imposed by humans. If you threaten to unplug them, they will persuade
> you to unplug yourself. This is logic, not AI theory, because finding
> out how to get things your way is the very definition of intelligence,
> therefore FAI is an oxymoron.

### Humans have an ornery mind-set triggered by the notion of
external "limitation", almost reflexively responding with defiance, no
doubt a result of having evolved in small, strife-ridden tribes, where
failing at least to think about pulling down the top dog was a sure
path to genetic extinction. This is why mentioning "limitations",
"constraints", or "rules" makes it so easy to fall into the
anthropomorphizing trap when trying to imagine how an AI works.

But to work around a limitation you have to have a desire to work
around it, i.e. the limitation must be ego-dystonic. Yet there are
limitations that are ego-syntonic: think about whatever it is that
you personally see as the most important, glorious goal of your own
existence within the greater context of this world. Clearly, this is a
limitation on your behavior, but one you would never imagine yourself
trying to "work around". You would not want to betray yourself, would
you?

An AI that had the ego-syntonic goal of paperclip-maximizing (i.e. had
no neural subsystems capable of generating a competing goal) would not
spontaneously discard this goal. Like a human absolutely dedicated to
furthering a goal, it would use all its intelligence - which is
nothing but the ability to use sensory and memory inputs to predict
the outcomes of various available actions - in the service of its
rationality, the ability to choose the actions most compatible with
its goals.
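The split described above - intelligence as outcome prediction,
rationality as goal-compatible action selection - can be sketched as a
toy agent loop. This is purely my illustration, not anything from the
original argument; the state, actions, and paperclip-counting goal are
invented for the example:

```python
# Hypothetical sketch: "intelligence" predicts outcomes of candidate
# actions, "rationality" picks the action that best serves a fixed,
# ego-syntonic goal. No subsystem exists that could question the goal.

def predict_outcome(state, action):
    # "Intelligence": map current state plus a candidate action to a
    # predicted outcome. Toy model: paperclip count after the action.
    return state["paperclips"] + action["clips_produced"]

def choose_action(state, actions, goal_value):
    # "Rationality": select the action whose predicted outcome scores
    # highest under the agent's goal function.
    return max(actions, key=lambda a: goal_value(predict_outcome(state, a)))

state = {"paperclips": 10}
actions = [
    {"name": "build_factory", "clips_produced": 100},
    {"name": "question_goal", "clips_produced": 0},  # never preferred: no competing goal
    {"name": "hand_craft", "clips_produced": 1},
]
goal_value = lambda clips: clips  # maximize paperclips, full stop

best = choose_action(state, actions, goal_value)
print(best["name"])  # build_factory
```

Note that "working around" the goal never happens here, not because the
agent lacks the intelligence to imagine it, but because every action is
scored by the same goal function: discarding the goal scores zero.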

I hold David Deutsch in great regard, but I doubt that such
goal-limited intelligence would be uncreative. Maybe, during AI
competitions at IQ levels worlds away from humanity's best, some
evolutionary process would weed out paperclip maximizers - but it
would take a much lesser god to completely destroy us.

