[extropy-chat] Fools building AIs
Eliezer S. Yudkowsky
sentience at pobox.com
Fri Oct 6 01:52:02 UTC 2006
Ben Goertzel wrote:
>>Perhaps they aren't even evil. Perhaps they are so disgusted with
>>human foibles that they really don't care much anymore whether much
>>brighter artificial minds are particularly friendly to humanity or not.
>>
>>- samantha
>
> Well put.
>
> I asked a friend recently what he thought about the prospect of
> superhuman beings annihilating humans to make themselves more
> processing substrate...
>
> His response: "It's about time."
>
> When asked what he meant, he said simply, "We're not so great, and
> we've been dominating the planet long enough. Let something better
> have its turn."
> I do not see why this attitude is inconsistent with a deep
> understanding of the nature of intelligence, and a profound
> rationality.
Hell, Ben, you aren't willing to say publicly that Arthur T. Murray is a
fool. In general you seem very reluctant to admit that certain things
are inconsistent with rationality - a charity level that *I* think is
inconsistent with rationality.
But anyway:
"In addition to standard biases, I have personally observed what look
like harmful modes of thinking specific to existential risks. The
Spanish flu of 1918 killed 25-50 million people. World War II killed 60
million people. 10^7 is the order of the largest catastrophes in
humanity's written history. Substantially larger numbers, such as 500
million deaths, and especially qualitatively different scenarios such as
the extinction of the entire human species, seem to trigger a different
mode of thinking - enter into a "separate magisterium". People who
would never dream of hurting a child hear of an existential risk, and
say, "Well, maybe the human species doesn't really deserve to survive."
There is a saying in heuristics and biases that people do not evaluate
events, but descriptions of events - what is called non-extensional
reasoning. The extension of humanity's extinction includes the death of
yourself, of your friends, of your family, of your loved ones, of your
city, of your country, of your political fellows. Yet people who would
take great offense at a proposal to wipe the country of Britain from the
map, to kill every member of the Democratic Party in the U.S., to turn
the city of Paris to glass - who would feel still greater horror on
hearing the doctor say that their child had cancer - these people will
discuss the extinction of humanity with perfect calm. "Extinction of
humanity", as words on paper, appears in fictional novels, or is
discussed in philosophy books - it belongs to a different context than
the Spanish flu. We evaluate descriptions of events, not extensions of
events. The cliché phrase "end of the world" invokes the magisterium of
myth and dream, of prophecy and apocalypse, of novels and movies. The
challenge of existential risks to rationality is that, the catastrophes
being so huge, people snap into a different mode of thinking. Human
deaths are suddenly no longer bad, and detailed predictions suddenly no
longer require any expertise, and whether the story is told with a happy
ending or a sad ending is a matter of personal taste in stories."
-- "Cognitive biases potentially affecting judgment of global risks"
I seriously doubt that your friend is processing that question with the
same part of his brain that he uses to decide e.g. whether to
deliberately drive into oncoming traffic or throw his three-year-old
daughter off a hotel balcony.
I've seen plenty of half-skilled rationalists fail by adopting separate
magisteria for different questions; they hold "spiritual" questions to a
different standard than they would use when writing a journal article.
Your friend, I suspect, is carrying out a form of non-extensional
reasoning which consists of reacting to verbal descriptions of events
quite differently than how he would react to witnessing even a small
sample of the human deaths involved.
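If you want that distinction in mechanical terms, here is a toy sketch
(my own, purely illustrative - the reactions, function names, and
population figure are placeholders, not a model of anyone's actual
psychology):

    # Toy illustration: the same event, evaluated two ways.
    WORLD_POPULATION = 6_000_000_000  # rough 2006 figure; the "Human_6e9" below

    def react_to_description(description: str) -> str:
        """Non-extensional reasoning: respond to the words themselves."""
        if "extinction" in description or "end of the world" in description:
            return "magisterium of myth and prophecy: discussed with perfect calm"
        return "ordinary tragedy: horror and grief"

    def react_to_extension(deaths: int) -> str:
        """Extensional reasoning: respond to what the words denote."""
        return (f"{deaths:,} individual deaths - yourself, your family, "
                f"your city - each as real as any single death")

    print(react_to_description("the extinction of humanity"))
    print(react_to_extension(WORLD_POPULATION))

The point of the toy is only that the two functions take different
inputs: one sees the words, the other sees what the words denote.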
This entire class of mistakes is harder to make, or at least much harder
to endorse in principle, if you have translated mathematics into
intuition, and now see thought processes as engines for achieving work.
Once you reach this level, it does not seem plausible that you can get
good models by various spiritual means, because that is analogous to
drawing a good map of a distant city while sitting in your living room
with the blinds drawn: there is no causal story by which your map could
come to correlate with the city, because you never interacted with the
territory - and interacting with the territory is how a properly
functioning cognitive engine works.
When you understand intelligence properly, you will not deliberately
endorse separate magisteria, because you know in principle that
divisions separating, e.g., biology from physics are divisions that
humans make in academic subjects, not divisions in the things
themselves; Bayes's Theorem is not going to operate any differently in
the two cases.
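For reference, the theorem itself, in its standard form - one equation,
whatever the subject matter:

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

Nothing on the right-hand side checks whether H is a hypothesis about
enzymes or about electrons; the update rule is the same.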
Or as Richard Feynman put it:
"A poet once said, "The whole universe is in a glass of wine." We will
probably never know in what sense he meant that, for poets do not write
to be understood. But it is true that if we look at a glass of wine
closely enough we see the entire universe.
There are the things of physics: the twisting liquid which evaporates
depending on the wind and weather, the reflections in the glass, and our
imagination adds the atoms. The glass is a distillation of the earth's
rocks, and in its composition we see the secrets of the universe's age,
and the evolution of the stars. What strange arrays of chemicals are in
the wine? How did they come to be? There are the ferments, the enzymes,
the substrates, and the products. There in wine is found the great
generalization: all life is fermentation. Nobody can discover the
chemistry of wine without discovering the cause of much disease. How
vivid is the claret, pressing its existence into the consciousness that
watches it!
If our small minds, for some convenience, divide this glass
of wine, this universe, into parts - physics, biology, geology,
astronomy, psychology, and so on - remember that nature does not know
it! So let us put it all back together, not forgetting ultimately what
it is for. Let it give us one more final pleasure: drink it and forget it all!"
Highly skilled rationalists who understand intelligence are going to be
on guard against:
Separate magisteria;
Extensional neglect;
Scope neglect;
Inconsistent evaluations of different verbal descriptions of the same
events;
not to mention,
Failure to search for better third alternatives;
Fatuous philosophy that sounds like deep wisdom; and
Self-destructive impulses.
As usual, people who don't understand the above and have already
committed such mistakes may be skeptical that modes of reasoning
committing these mistakes could be justifiably rejected by a more
skilled rationalist, just as creationists are skeptical that a more
skilled rationalist could justifiably reject creationism.
But a skilled rationalist who knows about extensional neglect is not
likely to endorse the destruction of Earth unless they also endorse the
death of Human_1, Human_2, Human_3, ... themselves, their mother, ...
Human_6e9.
Also I expect that your friend is making a mistake of simple fact, with
respect to what kind of superhumans these are likely to be - he thinks
they'll be better just because they've got more processing power - an
old, old mistake I once made myself.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence