[extropy-chat] Singularity Blues

Hal Finney hal at finney.org
Wed Apr 6 19:42:17 UTC 2005

Edmund Schaefer writes:
> Do you mean that all possible AI will feel this way whether we
> explicitly design them to or not, or that this goal will be
> deliberately imbued into the AI by its human programmers?

I agree that it makes more sense to carefully distinguish the goals of
the AI from its processing "engine".  In humans, our drives, instincts,
and subconscious desires are all mixed up with the pre-conscious and
conscious processing our brains do.  But an AI is unlikely to work that
way, unless it is created by uploading a human being.

Artificial Intelligence literally just refers to the intelligence aspect,
which is the processing/predicting/modelling part of the brain.  Only when
you marry some kind of goal to an intelligence do you get a volitional
being, one which can take actions in the world to achieve its goals.

"AI" is something of a misnomer.  Humans and other living creatures
are more than intelligences.  A person's intelligence is one of his
attributes, but it is not the person himself.  Our use of this word as
a shorthand for an artificial being leads us to focus too much on the
intelligence and not enough on the other aspects, the goals and drives
and desires that it would have.

Those goals are arguably more important than the intelligence.  Creating a
being with high intelligence but a poorly thought-out goal system is
the failure mode that the Singularitarians are so concerned about.

An interesting exercise is to consider an AI which could alter its goals.
Would it do so?  What kind of alterations might it perform, and why?
How would the design of the goal system facilitate or inhibit these
kinds of changes?  Would an AI automatically seek to change its goals
to be more selfish, or to be more kind?  Thinking carefully about these
questions can shed light on the differences between an artificial being
and naturally evolved ones like ourselves.