[extropy-chat] Singularitarian versus singularity
Eliezer S. Yudkowsky
sentience at pobox.com
Thu Dec 22 17:17:26 UTC 2005
The Avantguardian wrote:
>
> How can you be so certain that super-human
> intelligence (artificial or otherwise) is going to be
> the "savior of mankind"? Especially when those people
> who are super intelligent seem incapable of effecting
> the necessary changes to "save mankind". Look at
> Marilyn vos Savant for example. She has the highest intelligence
> ever quantified and what does she do? She writes a
> column for Parade magazine. If she is happy doing that
> then by all means I won't criticize her choices in
> life but it does kind of make me skeptical that some
> computer is going to wake up in somebody's basement one
> day, solve some massive equation, and change the world
> for the better.
Excerpt from a work in progress:
**
We tend to see individual differences instead of human universals. Thus
when someone says the word "intelligence", we think of Einstein, instead
of humans.
Individual differences of human intelligence have a standard label,
Spearman's g aka g-factor, a controversial interpretation of the solid
experimental result that different intelligence tests are highly
correlated with each other and with real-world outcomes such as lifetime
income. Spearman's g is a statistical abstraction from individual
differences of intelligence between humans, who as a species are far
more intelligent than lizards. Spearman's g is abstracted from
millimeter height differences among a species of giants.
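The statistical result behind Spearman's g can be illustrated with a toy simulation (not from the original post; the latent factor, noise level, and number of tests below are all made-up assumptions): when several test scores share a common latent factor, the tests correlate with one another, and the first principal component of their correlation matrix captures most of that shared variance.

```python
# Toy sketch of the g-factor idea: three hypothetical test scores that
# share one latent factor, plus independent noise. All parameters here
# are illustrative assumptions, not real psychometric data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Latent "general" factor shared by every simulated test.
g = rng.normal(size=n)

# Each test = shared factor + its own noise.
tests = np.column_stack(
    [g + rng.normal(scale=0.6, size=n) for _ in range(3)]
)

# The tests correlate with each other because of the shared factor...
corr = np.corrcoef(tests, rowvar=False)

# ...and the largest eigenvalue of the correlation matrix (the first
# principal component, a stand-in for "g") explains most of the variance.
eigvals = np.linalg.eigvalsh(corr)
share = eigvals[-1] / eigvals.sum()
```

In this contrived setup the pairwise correlations come out high and the first component dominates; real test batteries are messier, but the abstraction-from-correlations step is the same.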
We should not confuse Spearman's g with human general intelligence, our
capacity to handle a wide range of cognitive tasks incomprehensible to
other species. General intelligence is a between-species difference, a
complex adaptation, and a human universal found in all known cultures.
There may as yet be no academic consensus on intelligence, but there is
no doubt about the existence, or the power, of the
thing-to-be-explained. There is something about humans that let us set
our footprints on the Moon.
But the word "intelligence" commonly evokes pictures of the starving
professor with an IQ of 160 and the billionaire CEO with an IQ of merely
120. Indeed there are differences of individual ability apart from
"book smarts" which contribute to relative success in the human world:
enthusiasm, social skills, education, musical talent, rationality. Note
that each factor listed is cognitive. And jokes aside, you will not
find many CEOs, nor yet professors of academia, who are chimpanzees.
You will not find many acclaimed rationalists, nor artists, nor poets,
nor leaders, nor engineers, nor skilled networkers, nor martial artists,
nor musical composers who are mice. Intelligence is the foundation of
human power, the strength that fuels our other arts.
The danger of confusing general intelligence with g-factor is that it
leads to tremendously underestimating the potential impact of Artificial
Intelligence. (This applies to underestimating potential good impacts,
as well as potential bad impacts.) Even the phrase "transhuman AI" or
"artificial superintelligence" may still invoke images of
book-smarts-in-a-box: an AI that's really good at cognitive tasks
stereotypically associated with "intelligence", like chess or abstract
mathematics. But not superhumanly persuasive; or far better than humans
at predicting and manipulating human social situations; or inhumanly
creative in formulating long-term strategies. I am not saying to think
of Steve Jobs instead of Einstein - that's only the mirror version of
the error. The entire range from village idiot to Einstein, or from
Steve Wozniak to Steve Jobs, fits into a small dot on the range from
amoeba to human.
If the word "intelligence" evokes Einstein instead of humans (or Steve
Jobs instead of humans, or Alexander the Great instead of humans) then
it may sound sensible to say that intelligence is no match for a gun, as
if guns had grown on trees. It may sound sensible to say that
intelligence is no match for money, as if mice used money. Human beings
didn't start out with major assets in claws, teeth, armor, or any of the
other advantages that were the daily currency of other species. If you
had looked at humans from the perspective of the rest of the ecosphere,
there was no hint that the soft pink things would eventually clothe
themselves in armored tanks. We didn't win by fighting on other
species' battlegrounds. We had our own ideas of what mattered. Such is
the power of creativity.
**
> What evidence do I have for this skepticism? I need
> look no farther than this list. We are a bunch of really
> intelligent people but are we doing anything to solve
> the problems humanity faces? No, we argue about
> whether Bush is a hero or a fraud and whether the
> color red is the same for everybody or not. Every time
> somebody publishes a press release that they have
> invented a better mouse trap or something, we twitter
> excitedly for a few days, then go back to bickering
> over minutiae.
The Extropy list is one of the Singularity Institute's primary sources
of donors. This list is where Brian Atkins and Sabine Atkins (then
Sabine Stoeckel) and I got together and founded SIAI. If you aren't
doing anything, that's your own choice. If you dislike your choice,
change it!
The Singularity Institute is currently running a $100,000 Challenge
Grant. So's Alcor, if that's more to your taste. From bystander to
actor is a straightforward transformation, if you're dissatisfied with
cheering from the sidelines.
I agree that political yammering is a failure mode, which is why the SL4
list bans political discussion.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence