[Paleopsych] Edge Annual Question: What is Your Dangerous Idea?

Premise Checker checker at panix.com
Thu Jan 5 22:03:53 UTC 2006


Edge Annual Question: What is Your Dangerous Idea?
http://edge.org/q2006/q06_print.html

[Links omitted. There are 412 of them! Steven Pinker's, "Groups of people
may differ genetically in their average talents and temperaments," is the
one most likely to upset the equilibrium among rent-seeking coalitions in
the near present. Others are far more dangerous in the long run.]

CONTRIBUTORS
______________________________________________________________________

Philip W. Anderson
Scott Atran
Mahzarin Banaji
Simon Baron-Cohen
Samuel Barondes
Gregory Benford
Paul Bloom
Jesse Bering
Jeremy Bernstein
Jamshed Bharucha
Susan Blackmore
David Bodanis
Stewart Brand
Rodney Brooks
David Buss
Philip Campbell
Leo Chalupa
Andy Clark
Gregory Cochran
Jerry Coyne
M. Csikszentmihalyi
Richard Dawkins
Paul Davies
Stanislas Dehaene
Daniel C. Dennett
Keith Devlin
Jared Diamond
Denis Dutton
Freeman Dyson
George Dyson
Juan Enriquez
Paul Ewald
Todd Feinberg
Eric Fischl
Helen Fisher
Richard Foreman
Howard Gardner
Joel Garreau
David Gelernter
Neil Gershenfeld
Daniel Gilbert
Marcelo Gleiser
Daniel Goleman
Brian Goodwin
Alison Gopnik
April Gornik
John Gottman
Brian Greene
Diane F. Halpern
Haim Harari
Judith Rich Harris
Sam Harris
Marc D. Hauser
W. Daniel Hillis
Donald Hoffman
Gerald Holton
John Horgan
Nicholas Humphrey
Piet Hut
Marco Iacoboni
Eric R. Kandel
Kevin Kelly
Bart Kosko
Stephen Kosslyn
Kai Krause
Ray Kurzweil
Jaron Lanier
David Lykken
Gary Marcus
Lynn Margulis
Thomas Metzinger
Geoffrey Miller
Oliver Morton
David G. Myers
Randolph Nesse
Richard E. Nisbett
Tor Nørretranders
James O'Donnell
John Allen Paulos
Irene Pepperberg
Clifford Pickover
Steven Pinker
David Pizarro
Jordan Pollack
Ernst Pöppel
Carolyn Porco
Robert Provine
VS Ramachandran
Martin Rees
Matt Ridley
Carlo Rovelli
Rudy Rucker
Douglas Rushkoff
Karl Sabbagh
Roger Schank
Scott Sampson
Charles Seife
Terrence Sejnowski
Martin Seligman
Robert Shapiro
Rupert Sheldrake
Michael Shermer
Clay Shirky
Barry Smith
Lee Smolin
Dan Sperber
Paul Steinhardt
Steven Strogatz
Leonard Susskind
Timothy Taylor
Frank Tipler
Arnold Trehub
Sherry Turkle
J. Craig Venter
Philip Zimbardo

WHAT IS YOUR DANGEROUS IDEA?

The history of science is replete with discoveries that were considered 
socially, morally, or emotionally dangerous in their time; the Copernican 
and Darwinian revolutions are the most obvious. What is your dangerous 
idea? An idea you think about (not necessarily one you originated) that is 
dangerous not because it is assumed to be false, but because it might be 
true?
__________________________________________________________________

[Thanks to Steven Pinker for suggesting the Edge Annual Question --
2006.]
__________________________________________________________________

January 1, 2006

To the Edge Community,

Last year's 2005 Edge Question -- "What do you believe is true even
though you cannot prove it?" -- generated many eye-opening responses
from a "who's who" of third culture scientists and science-minded
thinkers. The 120 contributions comprised a document of 60,000 words.
The New York Times ("Science Times") and Frankfurter Allgemeine
Zeitung ("Feuilliton") published excepts in their print and online
editions simultaneously with Edge publication.

The event was featured in major media across the
world: BBC Radio; Il Sole 24 Ore, Prospect, El Pais, The Financial
Express (Bangladesh), The Sunday Times (UK), The Sydney Morning
Herald, The Guardian, La Stampa, The Telegraph, among others. A book
based on the 2005 Question -- What We Believe But Cannot Prove:
Today's Leading Thinkers on Science in the Age of Certainty, with an
introduction by the novelist Ian McEwan -- was just published by the
Free Press (UK). The US edition follows from HarperCollins in
February, 2006.

Since September, Edge has been featured and/or cited in The Toronto
Star, Boston Globe, Seed, Rocky Mountain News, Observer, El Pais, La
Vanguardia (cover story), El Mundo, Frankfurter Allgemeine Zeitung,
Science, Financial Times, Newsweek, AD, La Stampa, The Telegraph,
Quark (cover story), and The Wall Street Journal.

Online publication of the 2006 Question occurred on New Year's Day. To
date, the event has been covered by The Telegraph, The Guardian, The
Times, Arts & Letters Daily, Yahoo! News, and The Huffington Post.
___________________________________

Something radically new is in the air: new ways of understanding
physical systems, new ways of thinking about thinking that call into
question many of our basic assumptions.  A realistic biology of the
mind, advances in evolutionary biology, physics, information
technology, genetics, neurobiology, psychology, engineering, the
chemistry of materials: all are questions of critical importance with
respect to what it means to be human. For the first time, we have the
tools and the will to undertake the scientific study of human nature.

What you will find emerging out of the 119 original essays in the
75,000 word document written in response to the 2006 Edge Question --
"What is your dangerous idea?" -- are indications of a new natural
philosophy, founded on the realization of the import of complexity, of
evolution. Very complex systems -- whether organisms, brains, the
biosphere, or the universe itself -- were not constructed by design;
all have evolved. There is a new set of metaphors to describe
ourselves, our minds, the universe, and all of the things we know in
it.

Welcome to Edge. Welcome to "dangerous ideas". Happy New Year.

John Brockman
Publisher & Editor
__________________________________________________________________



CONTRIBUTORS
__________________________________________________________________

MARTIN REES
President, The Royal Society; Professor of Cosmology & Astrophysics,
Master, Trinity College, University of Cambridge; Author, Our Final
Century: The 50/50 Threat to Humanity's Survival

Science may be 'running out of control'

Public opinion surveys (at least in the UK) reveal a generally
positive attitude to science. However, this is coupled with widespread
worry that science may be 'running out of control'. This latter idea
is, I think, a dangerous one, because if widely believed it could be
self-fulfilling.

In the 21st century, technology will change the world faster than ever
-- the global environment, our lifestyles, even human nature itself.
We are far more empowered by science than any previous generation was:
it offers immense potential -- especially for the developing world --
but there could be catastrophic downsides. We are living in the first
century when the greatest risks come from human actions rather than
from nature.

Almost any scientific discovery has a potential for evil as well as
for good; its applications can be channelled either way, depending on
our personal and political choices; we can't accept the benefits
without also confronting the risks. The decisions that we make,
individually and collectively, will determine whether the outcomes of
21st century sciences are benign or devastating. But there's a real
danger that, rather than campaigning energetically for optimum
policies, we'll be lulled into inaction by a feeling of fatalism -- a
belief that science is advancing so fast, and is so much influenced by
commercial and political pressures, that nothing we can do makes any
difference.

The present share-out of resources and effort between different
sciences is the outcome of a complicated 'tension' between many
extraneous factors. And the balance is suboptimal. This seems so
whether we judge in purely intellectual terms, or take account of
likely benefit to human welfare. Some subjects have had the 'inside
track' and gained disproportionate resources. Others, such as
environmental research, renewable energy sources, biodiversity
studies and so forth, deserve more effort. Within medical research the
focus is disproportionately on cancer and cardiovascular studies, the
ailments that loom largest in prosperous countries, rather than on the
infectious diseases endemic in the tropics.

Choices on how science is applied -- to medicine, the environment, and
so forth -- should be the outcome of debate extending way beyond the
scientific community. Far more research and development can be done
than we actually want or can afford to do; and there are many
applications of science that we should consciously eschew.

Even if all the world's scientific academies agreed that a specific
type of research had a specially disquieting net 'downside' and all
countries, in unison, imposed a ban, what is the chance that it could
be enforced effectively enough? In view of the failure to control drug
smuggling or homicides, it is unrealistic to expect that, when the
genie is out of the bottle, we can ever be fully secure against the
misuse of science. And in our ever more interconnected world,
commercial pressures are harder to control and regulate. The challenges
and difficulties of 'controlling' science in this century will indeed
be daunting.

Cynics would go further, and say that anything that is scientifically
and technically possible will be done -- somewhere, sometime --
despite ethical and prudential objections, and whatever the regulatory
regime. Whether this idea is true or false, it's an exceedingly
dangerous one, because it engenders despairing pessimism, and
demotivates efforts to secure a safer and fairer world. The future
will best be safeguarded -- and science has the best chance of being
applied optimally -- through the efforts of people who are less
fatalistic.
__________________________________________________________________

J. CRAIG VENTER
Genomics Researcher; Founder & President, J. Craig Venter Science
Foundation

Revealing the genetic basis of personality and behavior will create
societal conflicts

From our initial analysis of the sequence of the human genome,
particularly with the much smaller than expected number of human
genes, the genetic determinists seemed to have clearly suffered a
setback. After all, those looking for one gene for each human trait
and disease couldn't possibly be accommodated with as few as
twenty-odd thousand genes when hundreds of thousands were anticipated.
Deciphering the genetic basis of human behavior has been a complex and
largely unsatisfying endeavor due to the limitations of the existing
tools of genetic trait analysis particularly with complex traits
involving multiple genes.

All this will soon undergo a revolutionary transformation. The rate of
change of DNA sequencing technology is continuing at an exponential
pace. We are approaching the time when we will go from having a few
human genome sequences to complex databases containing first tens, to
hundreds of thousands, of complete genomes, then millions. Within a
decade we will begin rapidly accumulating the complete genetic code of
humans along with the phenotypic repertoire of the same individuals.
By performing multifactorial analysis of the DNA sequence variations,
together with the comprehensive phenotypic information gleaned from
every branch of human investigatory discipline, for the first time in
history, we will be able to provide answers to quantitative
questions of what is genetic versus what is due to the environment.
This is already taking place in cancer research where we can measure
the differences in genetic mutations inherited from our parents versus
those acquired over our lives from environmental damage. This good
news will help transform the treatment of cancer by allowing us to
know which proteins need to be targeted.

However, when these new powerful computers and databases are used to
help us analyze who we are as humans, will society at large, largely
ignorant and afraid of science, be ready for the answers we are likely
to get?

For example, we know from experiments on fruit flies that there are
genes that control many behaviors, including sexual activity. We
sequenced the dog genome a couple of years ago and now an additional
breed has had its genome decoded. The canine world offers a unique
look into the genetic basis of behavior. The large number of distinct
dog breeds originated from the wolf genome by selective breeding, yet
each breed retains only subsets of the wolf behavior spectrum. We know
that there is a genetic basis not only of the appearance of the breeds
with 30-fold difference in weight and 6-fold in height but in their
inherited actions. For example border collies can use the power of
their stare to herd sheep instead of freezing them in place prior to
devouring them.

We attribute behaviors in other mammalian species to genes and
genetics but when it comes to humans we seem to like the notion that
we are all created equal, or that each child is a "blank slate". As we
obtain the sequences of more and more mammalian genomes including more
human sequences, together with basic observations and some common
sense, we will be forced to turn away from the politically correct
interpretations, as our new genomic tool sets provide the means to
allow us to begin to sort out the reality about nature or nurture. In
other words, we are at the threshold of a realistic biology of
humankind.

It will inevitably be revealed that there are strong genetic
components associated with most aspects of what we attribute to human
existence including personality subtypes, language capabilities,
mechanical abilities, intelligence, sexual activities and preferences,
intuitive thinking, quality of memory, will power, temperament,
athletic abilities, etc. We will find unique manifestations of human
activity linked to genetics associated with isolated and/or inbred
populations.

The danger rests with what we already know: that we are not all
created equal. Further danger comes with our ability to quantify and
measure the genetic side of the equation before we can fully
understand the much more difficult task of evaluating environmental
components of human existence. The genetic determinists will appear to
be winning again, but we cannot let them forget the range of potential
of human achievement with our limiting genetic repertoire.
__________________________________________________________________

LEO CHALUPA
Ophthalmologist and Neurobiologist, University of California, Davis

A 24-hour period of absolute solitude

Our brains are constantly subjected to the demands of multi-tasking
and a seemingly endless cacophony of information from diverse sources.
Cell phones, emails, computers, and cable television are omnipresent,
not to mention such archaic venues as books, newspapers and magazines.

This induces an unrelenting barrage of neuronal activity that in turn
produces long-lasting structural modification in virtually all
compartments of the nervous system. A fledgling industry touts the
virtues of exercising your brain for self-improvement. Programs are
offered for how to make virtually any region of your neocortex a more
efficient processor. Parents are urged to begin such regimes in
preschool children and adults are told to take advantage of their
brain's plastic properties for professional advancement. The evidence
documenting the veracity of such claims is still outstanding, but one
thing is clear. Even if brain exercise does work, the subsequent waves
of neuronal activities stemming from simply living a modern lifestyle
are likely to eradicate the presumed hard-earned benefits of brain
exercise.

My dangerous idea is that what's needed to attain optimal brain
performance -- with or without prior brain exercise -- is a 24-hour
period of absolute solitude. By absolute solitude I mean no verbal
interactions of any kind (written or spoken, live or recorded) with
another human being. I would venture that a significantly higher
proportion of people reading these words have tried skydiving than
experienced one day of absolute solitude.

What to do to fill the waking hours? That's a question that each
person would need to answer for him/herself. Unless you've spent time
in a monastery or in solitary confinement it's unlikely that you've
had to deal with this issue. The only activity not proscribed is
thinking. Imagine if everyone in this country had the opportunity to
do nothing but engage in uninterrupted thought for one full day a
year!

A national day of absolute solitude would do more to improve the
brains of all Americans than any other one-day program. (I leave it to
the lawmakers to figure out a plan for implementing this proposal.) The
danger stems from the fact that a 24-hour period of uninterrupted
thinking could cause irrevocable upheavals in much of what our society
currently holds sacred. But whether that would improve our present
state of affairs cannot be guaranteed.
__________________________________________________________________

V.S. RAMACHANDRAN
Neuroscientist; Director, Center for Brain and Cognition, University
of California, San Diego; Author, A Brief Tour of Human Consciousness

Francis Crick's "Dangerous Idea"

I am a brain, my dear Watson, and the rest of me is a mere
appendage.
-- Sherlock Holmes

An idea that would be "dangerous if true" is what Francis Crick
referred to as "the astonishing hypothesis"; the notion that our
conscious experience and sense of self is based entirely on the
activity of a hundred billion bits of jelly -- the neurons that
constitute the brain. We take this for granted in these enlightened
times but even so it never ceases to amaze me.

Some scholars have criticized Crick's tongue-in-cheek phrase (and title
of his book) on the grounds that the hypothesis he refers to is
"neither astonishing nor a hypothesis" (since we already know it to
be true). Yet the far-reaching philosophical, moral and ethical
dilemmas posed by his hypothesis have not been recognized widely
enough. It is in many ways the ultimate dangerous idea.

Let's put this in historical perspective.

Freud once pointed out that the history of ideas in the last few
centuries has been punctuated by "revolutions": major upheavals of
thought that have forever altered our view of ourselves and our place
in the cosmos.

First there was the Copernican system dethroning the earth as the
center of the cosmos.

Second was the Darwinian revolution; the idea that far from being the
climax of "intelligent design", we are merely neotenous apes that
happen to be slightly cleverer than our cousins.

Third, the Freudian view that even though you claim to be "in charge"
of your life, your behavior is in fact governed by a cauldron of
drives and motives of which you are largely unconscious.

And fourth, the discovery of DNA and the genetic code with its
implication (to quote James Watson) that "There are only molecules.
Everything else is sociology".

To this list we can now add the fifth, the "neuroscience revolution"
and its corollary pointed out by Crick -- the "astonishing hypothesis"
-- that even our loftiest thoughts and aspirations are mere byproducts
of neural activity. We are nothing but a pack of neurons.

If all this seems dehumanizing, you haven't seen anything yet.

[Editor's Note: A lengthy essay by Ramachandran on this subject is
scheduled for publication by Edge in January.]
__________________________________________________________________

DAVID BUSS
Psychologist, University of Texas, Austin; Author, The Murderer Next
Door: Why the Mind is Designed to Kill

The Evolution of Evil

When most people think of torturers, stalkers, robbers, rapists, and
murderers, they imagine crazed drooling monsters with maniacal Charles
Manson-like eyes. The calm, normal-looking image staring back at you
from the bathroom mirror reflects a truer representation. The
dangerous idea is that all of us contain within our large brains
adaptations whose functions are to commit despicable atrocities
against our fellow humans -- atrocities most would label evil.

The unfortunate fact is that killing has proved to be an effective
solution to an array of adaptive problems in the ruthless evolutionary
games of survival and reproductive competition: Preventing injury,
rape, or death; protecting one's children; eliminating a crucial
antagonist; acquiring a rival's resources; securing sexual access to a
competitor's mate; preventing an interloper from appropriating one's
own mate; and protecting vital resources needed for reproduction.

The idea that evil has evolved is dangerous on several counts. If our
brains contain psychological circuits that can trigger murder,
genocide, and other forms of malevolence, then perhaps we can't hold
those who commit carnage responsible: "It's not my client's fault,
your honor, his evolved homicide adaptations made him do it."
Understanding causality, however, does not exonerate murderers,
whether the tributaries trace back to human evolution history or to
modern exposure to alcoholic mothers, violent fathers, or the ills of
bullying, poverty, drugs, or computer games. It would be dangerous if
the theory of the evolved murderous mind were misused to let killers
free.

The evolution of evil is dangerous for a more disconcerting reason. We
like to believe that evil can be objectively located in a particular
set of evil deeds, or within the subset of people who perpetrate horrors
on others, regardless of the perspective of the perpetrator or victim.
That is not the case. The perspective of the perpetrator and victim
differ profoundly. Many view killing a member of one's in-group, for
example, to be evil, but take a different view of killing those in the
out-group. Some people point to the biblical commandment "thou shalt
not kill" as an absolute. Closer biblical inspection reveals that this
injunction applied only to murder within one's group.

Conflict with terrorists provides a modern example. Osama bin Laden
declared: "The ruling to kill the Americans and their allies --
civilians and military -- is an individual duty for every Muslim who
can do it in any country in which it is possible to do it." What is
evil from the perspective of an American who is a potential victim is
an act of responsibility and higher moral good from the terrorist's
perspective. Similarly, when President Bush identified an "axis of
evil," he rendered it moral for Americans to kill those falling under
that axis -- a judgment undoubtedly considered evil by those whose
lives have become imperiled.

At a rough approximation, we view as evil people who inflict massive
evolutionary fitness costs on us, our families, or our allies. No one
summarized these fitness costs better than the feared conqueror
Genghis Khan (1167-1227): "The greatest pleasure is to vanquish your
enemies, to chase them before you, to rob them of their wealth, to see
their near and dear bathed in tears, to ride their horses and sleep on
the bellies of their wives and daughters."

We can be sure that the families of the victims of Genghis Khan saw
him as evil. We can be just as sure that his many sons, whose harems
he filled with women of the conquered groups, saw him as a venerated
benefactor. In modern times, we react with horror at Mr. Khan
describing the deep psychological satisfaction he gained from
inflicting fitness costs on victims while purloining fitness fruits
for himself. But it is sobering to realize that perhaps half a percent
of the world's population today are descendants of Genghis Khan.

On reflection, the dangerous idea may not be that murder historically
has been advantageous to the reproductive success of killers; nor that
we all house homicidal circuits within our brains; nor even that all
of us are lineal descendants of ancestors who murdered. The danger
comes from people who refuse to recognize that there are dark sides of
human nature that cannot be wished away by attributing them to the
modern ills of culture, poverty, pathology, or exposure to media
violence. The danger comes from failing to gaze into the mirror and
come to grips with the capacity for evil in all of us.
__________________________________________________________________

PAUL BLOOM
Psychologist, Yale University; Author, Descartes' Baby

There are no souls

I am not concerned here with the radical claim that personal identity,
free will, and consciousness do not exist. Regardless of its merit,
this position is so intuitively outlandish that nobody but a
philosopher could take it seriously, and so it is unlikely to have any
real-world implications, dangerous or otherwise.

Instead I am interested in the milder position that mental life has a
purely material basis. The dangerous idea, then, is that Cartesian
dualism is false. If what you mean by "soul" is something immaterial
and immortal, something that exists independently of the brain, then
souls do not exist. This is old hat for most psychologists and
philosophers, the stuff of introductory lectures. But the rejection of
the immaterial soul is unintuitive, unpopular, and, for some people,
downright repulsive.

In the journal "First Things", Patrick Lee and Robert P. George
outline some worries from a religious perspective.

"If science did show that all human acts, including conceptual thought
and free choice, are just brain processes,... it would mean that the
difference between human beings and other animals is only
superficial -- a difference of degree rather than a difference in kind;
it would mean that human beings lack any special dignity worthy of
special respect. Thus, it would undermine the norms that forbid
killing and eating human beings as we kill and eat chickens, or
enslaving them and treating them as beasts of burden as we do horses
or oxen."

The conclusions don't follow. Even if there are no souls, humans might
differ from non-human animals in some other way, perhaps with regard
to the capacity for language or abstract reasoning or emotional
suffering. And even if there were no difference, it would hardly give
us license to do terrible things to human beings. Instead, as Peter
Singer and others have argued, it should make us kinder to non-human
animals. If a chimpanzee turned out to possess the intelligence and
emotions of a human child, for instance, most of us would agree that
it would be wrong to eat, kill, or enslave it.

Still, Lee and George are right to worry that giving up on the soul
means giving up on an a priori distinction between humans and other
creatures, something which has very real consequences. It would also
affect how we think about stem-cell research and abortion,
euthanasia, cloning, and cosmetic psychopharmacology. It would have
substantial implications for the legal realm -- a belief in immaterial
souls has led otherwise sophisticated commentators to defend a
distinction between actions that we do and actions that our brains do.
We are responsible only for the former, motivating the excuse that
Michael Gazzaniga has called, "My brain made me do it." It has been
proposed, for instance, that if a pedophile's brain shows a certain
pattern of activation while contemplating sex with a child, he should
not be viewed as fully responsible for his actions. When you give up
on the soul, and accept that all actions correspond to brain activity,
this sort of reasoning goes out the window.

The rejection of souls is more dangerous than the idea that kept us so
occupied in 2005 -- evolution by natural selection. The battle between
evolution and creationism is important for many reasons; it is
where science takes a stand against superstition. But, like the origin
of the universe, the origin of the species is an issue of great
intellectual importance and little practical relevance. If everyone
were to become a sophisticated Darwinian, our everyday lives would
change very little. In contrast, the widespread rejection of the soul
would have profound moral and legal consequences. It would also
require people to rethink what happens when they die, and give up the
idea (held by about 90% of Americans) that their souls will survive
the death of their bodies and ascend to heaven. It is hard to get more
dangerous than that.
__________________________________________________________________

PHILIP CAMPBELL
Editor-in Chief, Nature

Scientists and governments developing public engagement about science
and technology are missing the point

This turns out to be true in cases where there are collapses in
consensus that have serious societal consequences. Whether in relation
to climate change, GM crops or the UK's triple vaccine for measles,
mumps and rubella, alternative science networks develop amongst people
who are neither ignorant nor irrational, but have perceptions about
science, the scientific literature and its implications that differ
from those prevailing in the scientific community. These perceptions
and discussions may be half-baked, but are no less powerful for all
that, and carry influence on the internet and in the media.
Researchers and governments haven't yet learned how to respond to such
"citizen's science". Should they stop explaining and engaging? No. But
they need also to understand better the influences at work within such
networks -- often too dismissively stereotyped -- at an early stage in
the debate in order to counter bad science and minimize the impacts of
falsehoods.
__________________________________________________________________

JESSE BERING
Psychologist, University of Arkansas

Science will never silence God

With each meticulous turn of the screw in science, with each
tightening up of our understanding of the natural world, we pull more
taut the straps over God's muzzle. From botany to bioengineering, from
physics to psychology, what is science really but true Revelation --
and what is Revelation but the negation of God? It is a humble pursuit
we scientists engage in: racing to reality. Many of us suffer the
harsh glare of the American theocracy, whose heart still beats loud
and strong in this new year of the 21st century. We bravely favor
truth, in all its wondrous, amoral, and 'meaningless' complexity over
the singularly destructive Truth born of the trembling minds of our
ancestors. But my dangerous idea, I fear, is that no matter how far
our thoughts shall vault into the eternal sky of scientific progress,
no matter how dazzling the effects of this progress, God will always
bite through his muzzle and banish us from the starry night of
humanistic ideals.

Science is an endless series of binding and rebinding his breath;
there will never be a day when God does not speak for the majority.
There will never be a day even when he does not whisper in the most
godless of scientists' ears. This is because God is not an idea, nor a
cultural invention, not an 'opiate of the masses' or any such thing;
God is a way of thinking that was rendered permanent by natural
selection.

As scientists, we must toil and labor and toil again to silence God,
but ultimately this is like cutting off our ears to hear more clearly.
God too is a biological appendage; until we acknowledge this fact for
what it is, until we rear our children with this knowledge, he will
continue to howl his discontent for all of time.
__________________________________________________________________

PAUL W. EWALD
Evolutionary Biologist; Director, Program in Evolutionary Medicine,
University of Louisville; Author, Plague Time

A New Golden Age of Medicine

My dangerous idea is that we have in hand most of the information we
need to facilitate a new golden age of medicine. And what we don't
have in hand we can get fairly readily by wise investment in targeted
research and intervention. In this golden age we should be able to
prevent most debilitating diseases in developed and undeveloped
countries within a relatively short period of time with much less
money than is generally presumed. This is good news. Why is it
dangerous?

One array of dangers arises because ideas that challenge the status
quo threaten the livelihood of many. When the many are embedded in
powerful places the threat can be stifling, especially when a lot of
money and status are at stake. So it is within the arena of medical
research and practice. Imagine what would happen if the big diseases
-- cancers, arteriosclerosis, stroke, diabetes -- were largely
prevented.

Big pharmas would become small because the demand for prescription
drugs would drop. The prestige of physicians would drop because they
would no longer be relied upon to prolong life. The burgeoning
industry of biomedical research would shrink because governmental and
private funding for this research would diminish. Also threatened
would be scientists whose sense of self-worth is built upon the grant
dollars they bring in for discovering minuscule parts of big puzzles.
Scientists have been beneficiaries of the lack of progress in recent
decades, which has caused leaders such as the past head of NIH, Harold
Varmus, to declare that what is needed is more basic research. But
basic research has not generated many great advancements in the
prevention or cure of disease in recent decades.

The major exception is in the realm of infectious disease where many
important advancements were generated from tiny slices of funding. The
discovery that peptic ulcers are caused by infections that can be
cured with antibiotics is one example. Another is the discovery that
liver cancer can often be prevented by a vaccine against the hepatitis
B virus or by screening blood for hepatitis B and C viruses.

The track record of the past few decades shows that these examples are
not quirks. They are part of a trend that goes back over a century to
the beginning of the germ theory itself. And the accumulating evidence
supporting infectious causation of big bad diseases of modern society
is following the same pattern that occurred for diseases that have
been recently accepted as caused by infection.

The process of acceptance typically occurs over one or more decades
and accords with Schopenhauer's generalization about the establishment
of truth: it is first ridiculed, then violently opposed, and finally
accepted as being self-evident. Just a few groups of pathogens seem to
be big players: streptococci, Chlamydia, some bacteria of the oral
cavity, hepatitis viruses, and herpes viruses. If the correlations
between these pathogens and the big diseases of wealthy countries do
in fact reflect infectious causation, effective vaccines against these
pathogens could contribute in a big way to a new golden age of
medicine that could rival the first half of the 20th century.

The transition to this golden age, however, requires two things: a
shift in research effort to identifying the pathogens that cause the
major diseases and development of effective interventions against
them. The first would be easy to bring about by restructuring the
priorities of NIH -- where money goes, so go the researchers. The
second requires mechanisms for putting in place programs that cannot
be trusted to the free market for the same kinds of reasons that Adam
Smith gave for national defense. The goals of the interventions do not
mesh nicely with the profit motive of the free market. Vaccines, for
example, are not very profitable.

Pharmas cannot make as much money by selling one vaccine per person to
prevent a disease as they can selling a patented drug like Vioxx which
will be administered day after day, year after year to treat symptoms
of an illness that is never cured. And though liability issues are
important for such symptomatic treatment, the pharmas can argue
forcefully that drugs with nasty side effects provide some benefit
even to those who suffer most from the side effects because the drugs
are given not to prevent an illness but rather to people who already
have an illness. This sort of defense is less convincing when the
victim is a child who developed permanent brain damage from a rare
complication of a vaccine that was given to protect them against a
chronic illness that they might have acquired decades later.

Another part of this vision of a new golden age will be the ability to
distinguish real threats from pseudo-threats. This ability will allow
us to invest in policy and infrastructure that will protect people
against real threats without squandering resources and destroying
livelihoods in efforts to protect against pseudo-threats. Our present
predicament on this front is far from this ideal.

Today experts on infectious diseases and institutions entrusted to
protect and improve human health sound the alarm in response to each
novel threat. The current fears over a devastating pandemic of bird
flu are a case in point. Some of the loudest voices offer a simplistic
argument: failing to prepare for the worst-case scenarios is
irresponsible and dangerous. This criticism has been recently leveled
at me and others who question expert proclamations, such as those from
the World Health Organization and the Centers for Disease Control.
These proclamations inform us that H5N1 bird flu virus poses an
imminent threat of an influenza pandemic similar to or even worse than
the 1918 pandemic. I have decreased my popularity in such circles by
suggesting that the threat of this scenario is essentially
nonexistent. In brief I argue that the 1918 influenza viruses evolved
their unique combination of high virulence and high transmissibility
in the conditions at the Western Front of World War I.
By transporting contagious flu patients into a series of tightly
packed groups of susceptible individuals, personnel fostered
transmission from people who were completely immobilized by their
illness. Such conditions must have favored the predator-like variants
of the influenza virus; these variants would have a competitive edge
because they could ruthlessly exploit a person for their own
replication and still get transmitted to large numbers of susceptible
individuals.
These conditions have not recurred in human populations since then
and, accordingly, we have never had any outbreaks of influenza viruses
that have been anywhere near as harmful as those that emerged at the
Western Front. So long as we do not allow such conditions to occur
again we have little to fear from a reevolution of such a predatory
virus.

The fear of a 1918 style pandemic has fueled preparations by a
government which, embarrassed by its failure to deal adequately with
the damage from Katrina, seems determined to prepare for any perceived
threat to save face. I would have no problem with the accusation of
irresponsibility if preparations for a 1918 style pandemic were cost
free. But they are not.

The $7 billion that the Bush administration is planning as a
downpayment for pandemic preparedness has to come from somewhere. If
money is spent to prepare for an imaginary pandemic, our progress
could be impeded on other fronts that could lead to or have already
established real improvements in public health.

Conclusions about responsibility or irresponsibility of this argument
require that the threat from pandemic influenza be assessed relative
to the damage that results from the procurement of the money from
other sources. The only reliable evidence of the damage from pandemic
influenza under normal circumstances is the experience of the two
pandemics that have occurred since 1918, one in 1957 and the other in
1968. The mortality caused by these pandemics was one-tenth to
one-hundredth the death toll from the 1918 pandemic.

We do need to be prepared for an influenza pandemic of the normal
variety, just as we needed to be prepared for category 5 hurricanes in
the Gulf of Mexico. If possible our preparations should allow us to
stop an incipient pandemic before it materializes. In contrast with
many of the most vocal experts I do not conclude that our surveillance
efforts will be quickly overwhelmed by a highly transmissible
descendant of the influenza virus that has generated the most recent
fright (dubbed H5N1). The transition of the H5N1 virus to a pandemic
virus would require evolutionary change.

The dialogue on this matter, however, continues to neglect the primary
mechanism of the evolutionary change: natural selection. Instead it is
claimed that H5N1 could mutate to become a full-fledged human virus
that is both highly transmissible and highly lethal. Mutation provides
only the variation on which natural selection acts. We must consider
natural selection if we are to make meaningful assessments of the
danger posed by the H5N1 virus.

The evolution of the 1918 virus was gradual, and both evidence and
theory lead to the conclusion that any evolution of increased
transmissibility of H5N1 from human to human will be gradual, as it
was with SARS. With surveillance we can detect such changes in humans
and intervene to stop further spread as was done with SARS. We do not
need to trash the economy of Southeast Asia each year to accomplish
this.

The dangerous vision of a golden age does not leave the poor countries
behind. As I have discussed in my articles and books, we should be
able to control much of the damage caused by the major killers in poor
countries by infrastructural improvements that not only reduce the
frequency of infection but also cause the infectious agents to evolve
toward benignity.

This integrated approach offers the possibility to remodel our current
efforts against the major killers -- AIDS, malaria, tuberculosis,
dysentery and the like. We should be able to move from just holding
ground to institution of the changes that created the freedom from
acute infectious diseases that have been enjoyed by inhabitants of
rich countries over the past century.

Dangerous indeed! Excellent solutions are often dangerous to the
status quo because they work. One measure of danger to some but
success to the general population is the extent to which highly
specialized researchers, physicians, and other health care workers
will need to retrain, and the extent to which hospitals and
pharmaceutical companies will need to downsize. That is what happens
when we introduce excellent solutions to health problems. We need not
be any more concerned about these difficulties than the loss of the
iron lung industry and the retraining of polio therapists and
researchers in the wake of the Salk vaccine.
_________________________________________________________________

BART KOSKO
Professor, Electrical Engineering, USC; Author, Heaven in a Chip

Most bell curves have thick tails

Any challenge to the normal probability bell curve can have
far-reaching consequences because a great deal of modern science and
engineering rests on this special bell curve. Most of the standard
hypothesis tests in statistics rely on the normal bell curve either
directly or indirectly. These tests permeate the social and medical
sciences and underlie the poll results in the media. Related tests and
assumptions underlie the decision algorithms in radar and cell phones
that decide whether the incoming energy blip is a 0 or a 1. Management
gurus exhort manufacturers to follow the "six sigma" creed of reducing
the variance in products to only two or three defective products per
million in accord with "sigmas" or standard deviations from the mean
of a normal bell curve. Models for trading stock and bond derivatives
assume an underlying normal bell-curve structure. Even quantum and
signal-processing uncertainty principles or inequalities involve the
normal bell curve as the equality condition for minimum uncertainty.
Deviating even slightly from the normal bell curve can sometimes
produce qualitatively different results.

The proposed dangerous idea stems from two facts about the normal bell
curve.

First: The normal bell curve is not the only bell curve. There are at
least as many different bell curves as there are real numbers. This
simple mathematical fact poses at once a grammatical challenge to the
title of Charles Murray's IQ book The Bell Curve. Murray should have
used the indefinite article "A" instead of the definite article "The."
This is but one of many examples that suggest that most scientists
simply equate the entire infinite set of probability bell curves with
the normal bell curve of textbooks. Nature need not share the same
practice. Human and non-human behavior can be far more diverse than
the classical normal bell curve allows.

Second: The normal bell curve is a skinny bell curve. It puts most of
its probability mass in the main lobe or bell while the tails quickly
taper off exponentially. So "tail events" appear rare simply as an
artifact of this bell curve's mathematical structure. This limitation
may be fine for approximate descriptions of "normal" behavior near the
center of the distribution. But it largely rules out or marginalizes
the wide range of phenomena that take place in the tails.

Again most bell curves have thick tails. Rare events are not so rare
if the bell curve has thicker tails than the normal bell curve has.
Telephone interrupts are more frequent. Lightning flashes are more
frequent and more energetic. Stock market fluctuations or crashes are
more frequent. How much more frequent they are depends on how thick
the tail is -- and that is always an empirical question of fact.
Neither logic nor assume-the-normal-curve habit can answer the
question. Instead scientists need to carry their evidentiary burden a
step further and apply one of the many available statistical tests to
determine and distinguish the bell-curve thickness.
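
[Illustration: a minimal Python sketch of the contrast described
above, comparing how fast the tails of a thin-tailed and two
thicker-tailed bell curves decay. The choice of curves and cutoffs is
an illustrative assumption, not Kosko's own test, and the scales are
not matched; SciPy is assumed to be available.]

# Tail probabilities P(|X| > k) for a thin-tailed versus two
# thicker-tailed bell curves: exponential versus power-law tail decay.
from scipy import stats

curves = {
    "normal": stats.norm(),            # thin, exponentially decaying tails
    "student-t, df=3": stats.t(df=3),  # moderately thick power-law tails
    "cauchy": stats.cauchy(),          # very thick tails (stable, alpha=1)
}

for k in (2, 3, 5, 10):
    row = "  ".join(f"{name}={2 * dist.sf(k):.2e}" for name, dist in curves.items())
    print(f"P(|X| > {k:2d}):  {row}")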

One response to this call for tail-thickness sensitivity is that logic
alone can decide the matter because of the so-called central limit
theorem of classical probability theory. This important "central"
result states that some suitably normalized sums of random terms will
converge to a standard normal random variable and thus have a normal
bell curve in the limit. So Gauss and a lot of other long-dead
mathematicians got it right after all and thus we can continue to
assume normal bell curves with impunity.

That argument fails in general for two reasons.

The first reason it fails is that the classical central limit theorem
result rests on a critical assumption that need not hold and that
often does not hold in practice. The theorem assumes that the random
dispersion about the mean is so comparatively slight that a particular
measure of this dispersion -- the variance or the standard deviation
-- is finite or does not blow up to infinity in a mathematical sense.
Most bell curves have infinite or undefined variance even though they
have a finite dispersion about their center point. The error is not in
the bell curves but in the two-hundred-year-old assumption that
variance equals dispersion. It does not in general. Variance is a
convenient but artificial and non-robust measure of dispersion. It
tends to overweight "outliers" in the tail regions because the
variance squares the underlying errors between the values and the
mean. Such squared errors simplify the math but produce the infinite
effects. These effects do not appear in the classical central limit
theorem because the theorem assumes them away.
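
[Illustration: a toy numerical check of the point above. For a
finite-variance curve the running sample mean and variance settle down
as the sample grows; for the Cauchy they never do, which is exactly
the assumption the classical central limit theorem needs, while a
robust dispersion measure such as the interquartile range stays
stable. Details are illustrative; NumPy is assumed to be available.]

# Running sample statistics for thin-tailed versus thick-tailed draws.
import numpy as np

rng = np.random.default_rng(42)
for n in (10**3, 10**5, 10**6):
    g = rng.standard_normal(n)   # finite variance
    c = rng.standard_cauchy(n)   # infinite (undefined) variance and mean
    iqr_g = np.subtract(*np.percentile(g, [75, 25]))  # robust dispersion
    iqr_c = np.subtract(*np.percentile(c, [75, 25]))
    print(f"n={n:>9,}  normal: mean={g.mean():+.3f} var={g.var():.3f} iqr={iqr_g:.3f}"
          f"   cauchy: mean={c.mean():+.3f} var={c.var():.3e} iqr={iqr_c:.3f}")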

The second reason the argument fails is that the central limit theorem
itself is just a special case of a more general result called the
generalized central limit theorem. The generalized central limit
theorem yields convergence to thick-tailed bell curves in the general
case. Indeed it yields convergence to the thin-tailed normal bell
curve only in the special case of finite variances. These general
cases define the infinite set of the so-called stable probability
distributions and their symmetric versions are bell curves. There are
still other types of thick-tailed bell curves (such as the Laplace
bell curves used in image processing and elsewhere) but the stable
bell curves are the best known and have several nice mathematical
properties. The figure below shows the normal or Gaussian bell curve
superimposed over three thicker-tailed stable bell curves. The catch
in working with stable bell curves is that their mathematics can be
nearly intractable. So far we have closed-form solutions for only two
stable bell curves (the normal or Gaussian and the very-thick-tailed
Cauchy curve) and so we have to use transform and computer techniques
to generate the rest. Still the exponential growth in computing power
has long since made stable or thick-tailed analysis practical for many
problems of science and engineering.
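
[Illustration: a short sketch of drawing samples from symmetric
alpha-stable bell curves, assuming a reasonably recent SciPy that
exposes scipy.stats.levy_stable. alpha = 2 is the Gaussian special
case; smaller alpha means thicker tails, with alpha = 1 giving the
Cauchy curve. The alpha values and sample size are arbitrary choices
for illustration.]

# Draw from several symmetric alpha-stable curves and compare how far
# their most extreme samples reach as the tails thicken.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
n = 20_000
for alpha in (2.0, 1.5, 1.0, 0.8):
    x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)  # beta=0: symmetric
    print(f"alpha={alpha:.1f}  median |x|={np.median(np.abs(x)):8.2f}  "
          f"max |x|={np.abs(x).max():14.1f}")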

This last point shows how competing bell curves offer a new context
for judging whether a given set of data reasonably obeys a normal bell
curve. One of the most popular eye-ball tests for normality is the PP
or probability plot of the data. The data should almost perfectly fit
a straight line if the data come from a normal probability
distribution. But this seldom happens in practice. Instead real data
snake all around the ideal straight line in a PP diagram. So it is
easy for the user to shrug and call any data deviation from the
ideal line good enough in the absence of a direct bell-curve
competitor. A fairer test is to compare the normal PP plot with the
best-fitting thick-tailed or stable PP plot. The data may well line up
better in a thick-tailed PP diagram than they do in the usual normal
PP diagram. This test evidence would reject the normal bell-curve
hypothesis in favor of the thicker-tailed alternative. Ignoring these
thick-tailed alternatives favors accepting the less-accurate normal
bell curve and thus leads to underestimating the occurrence of tail
events.
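
[Illustration: a rough version of the probability-plot comparison just
described, using synthetic thick-tailed data as a stand-in for real
measurements. The use of the Cauchy as the thick-tailed competitor and
the fit-correlation comparison are illustrative choices; NumPy and
SciPy are assumed.]

# Compare how straight the probability plot is under a normal versus a
# thick-tailed (Cauchy) reference distribution. A larger correlation r
# means the plotted points hug the straight line more closely.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = stats.cauchy.rvs(size=2_000, random_state=rng)  # synthetic thick-tailed data

for name, dist in [("normal", stats.norm), ("cauchy", stats.cauchy)]:
    (_osm, _osr), (slope, intercept, r) = stats.probplot(data, dist=dist)
    print(f"{name:>7} probability plot: fit correlation r = {r:.4f}")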

Stable or thick-tailed probability curves continue to turn up as more
scientists and engineers search for them. They tend to accurately
model impulsive phenomena such as noise in telephone lines or in the
atmosphere or in fluctuating economic assets. Skewed versions appear
to best fit the data for the Ethernet traffic in bit packets. Here
again the search is ultimately an empirical one for the best-fitting
tail thickness. Similar searches will only increase as the math and
software of thick-tailed bell curves work their way into textbooks on
elementary probability and statistics. Much of it is already freely
available on the Internet.

Thicker-tail bell curves also imply that there is not just a single
form of pure white noise. Here too there are at least as many forms of
white noise (or any colored noise) as there are real numbers.
Whiteness just means that the noise spikes or hisses and pops are
independent in time or that they do not correlate with one another.
The noise spikes themselves can come from any probability distribution
and in particular they can come from any stable or thick-tailed bell
curve. The figure below shows the normal or Gaussian bell curve and
three kindred thicker-tailed bell curves and samples of their
corresponding white noise. The normal curve has the upper-bound alpha
parameter of 2 while the thicker-tailed curves have lower values --
tail thickness increases as the alpha parameter falls. The white noise
from the thicker-tailed bell curves becomes much more impulsive as
their bell narrows and their tails thicken because then more extreme
events or noise spikes occur with greater frequency.
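
[Illustration: a small sketch of the point that "white" refers only to
independence across time, not to the shape of the amplitude
distribution. Both streams below are white; only the second has
thick-tailed Cauchy spikes. NumPy is assumed; the lag-1 correlation is
an informal check and is itself a noisy estimate for the thick-tailed
stream.]

# Two white-noise streams with different marginal bell curves.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
streams = {
    "gaussian": rng.standard_normal(n),
    "cauchy": rng.standard_cauchy(n),
}

for name, x in streams.items():
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # near zero for white noise
    print(f"{name:>8}: lag-1 correlation={lag1:+.4f}  "
          f"largest spike={np.abs(x).max():10.1f}")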

[Figure omitted in this plain-text version.]

Competing bell curves: The figure on the left shows four superimposed
symmetric alpha-stable bell curves with different tail thicknesses
while the plots on the right show samples of their corresponding forms
of white noise. The parameter alpha describes the thickness of a
stable bell curve and ranges from 0 to 2. Tails grow thicker as alpha
grows smaller. The white noise grows more impulsive as the tails grow
thicker. The Gaussian or normal bell curve (alpha = 2) has the
thinnest tail of the four stable curves while the Cauchy bell curve
(alpha = 1) has the thickest tails and thus the most impulsive noise.
Note the different magnitude scales on the vertical axes. All the bell
curves have finite dispersion while only the Gaussian or normal bell
curve has a finite variance or finite standard deviation.

My colleagues and I have recently shown that most mathematical models
of spiking neurons in the retina not only benefit from small amounts
of added noise, increasing their Shannon bit count, but continue to
benefit even from added thick-tailed or "infinite-variance" noise. The
same result holds experimentally for a
carbon nanotube transistor that detects signals in the presence of
added electrical noise.
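
[Illustration: the result above concerns detailed retinal neuron
models; the toy sketch below only shows the flavor of such a noise
benefit. A bare threshold unit cannot see a weak sub-threshold signal
on its own, but an intermediate amount of added thick-tailed (Cauchy)
noise lets its spike train track the signal, while too much noise
drowns it again. All parameters are made up for illustration; NumPy is
assumed.]

# Noise benefit for a toy threshold detector driven by a sub-threshold sine.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20_000)
signal = 0.8 * np.sin(2 * np.pi * t / 100.0)   # peaks stay below the threshold
threshold = 1.0

for scale in (0.03, 0.1, 0.3, 1.0, 3.0, 10.0):
    noise = scale * rng.standard_cauchy(t.size)      # thick-tailed noise
    spikes = (signal + noise > threshold).astype(float)
    corr = np.corrcoef(spikes, signal)[0, 1]         # how well spikes track the signal
    print(f"noise scale={scale:5.2f}  spike/signal correlation={corr:+.3f}")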

Thick-tailed bell curves further call into question what counts as a
statistical "outlier" or bad data: Is a tail datum error or pattern?
The line between extreme and non-extreme data is not just fuzzy but
depends crucially on the underlying tail thickness.

The usual rule of thumb is that the data is suspect if it lies outside
three or even two standard deviations from the mean. Such rules of
thumb reflect both the tacit assumption that dispersion equals
variance and the classical central-limit effect that large data sets
are not just approximately bell curves but approximately thin-tailed
normal bell curves. An empirical test of the tails may well justify
the latter thin-tailed assumption in many cases. But the mere
assertion of the normal bell curve does not. So "rare" events may not
be so rare after all.
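
[Illustration: a toy check of how the "two or three standard
deviations" rule behaves when the tails are thick. The fraction of
points flagged depends strongly on the underlying curve, and for the
Cauchy the sample standard deviation is itself an unstable yardstick.
Sample sizes and distributions are illustrative; NumPy is assumed.]

# Fraction of samples flagged by the 3-standard-deviation rule of thumb.
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
samples = {
    "normal": rng.standard_normal(n),
    "t, df=3": rng.standard_t(3, n),
    "cauchy": rng.standard_cauchy(n),
}

for name, x in samples.items():
    sd = x.std()   # fluctuates wildly for infinite-variance data
    frac = np.mean(np.abs(x - x.mean()) > 3 * sd)
    print(f"{name:>8}: sample sd={sd:10.2f}   fraction beyond 3 sd={frac:.4%}")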
_________________________________________________________________

MATT RIDLEY
Science Writer; Founding chairman of the International Centre for
Life; Author, The Agile Gene: How Nature Turns on Nurture

Government is the problem not the solution

In all times and in all places there has been too much government. We
now know what prosperity is: it is the gradual extension of the
division of labour through the free exchange of goods and ideas, and
the consequent introduction of efficiencies by the invention of new
technologies. This is the process that has given us health, wealth and
wisdom on a scale unimagined by our ancestors. It not only raises
material standards of living, it also fuels social integration,
fairness and charity. It has never failed yet. No society has grown
poorer or more unequal through trade, exchange and invention. Think of
pre-Ming as opposed to Ming China, seventeenth century Holland as
opposed to imperial Spain, eighteenth century England as opposed to
Louis XIV's France, twentieth century America as opposed to Stalin's
Russia, or post-war Japan, Hong Kong and Korea as opposed to Ghana,
Cuba and Argentina. Think of the Phoenicians as opposed to the
Egyptians, Athens as opposed to Sparta, the Hanseatic League as
opposed to the Roman Empire. In every case, weak or decentralised
government but strong free trade led to surges in prosperity for all,
whereas strong, central government led to parasitic, tax-fed
officialdom, a stifling of innovation, relative economic decline and
usually war.

Take Rome. It prospered because it was a free trade zone. But it
repeatedly invested the proceeds of that prosperity in too much
government and so wasted it in luxury, war, gladiators and public
monuments. The Roman empire's list of innovations is derisory, even
compared with that of the 'dark ages' that followed.

In every age and at every time there have been people who say we need
more regulation, more government. Sometimes, they say we need it to
protect exchange from corruption, to set the standards and police the
rules, in which case they have a point, though often they exaggerate
it. Self-policing standards and rules were developed by free-trading
merchants in medieval Europe long before they were taken over and
codified as laws (and often corrupted) by monarchs and governments.
Sometimes, they say we need it to protect the weak, the victims of
technological change or trade flows. But throughout history such
intervention, though well meant, has usually proved misguided --
because its progenitors refuse to believe in (or find out about) David
Ricardo's Law of Comparative Advantage: even if China is better at
making everything than France, there will still be a million things it
pays China to buy from France rather than make itself. Why? Because
rather than invent, say, luxury goods or insurance services itself,
China will find it pays to make more T-shirts and use the proceeds to
import luxury goods and insurance.
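
Ricardo's arithmetic is easy to check with a toy model. The sketch
below uses made-up productivities (the country names are Ridley's, the
numbers are not): even though "China" out-produces "France" per worker
in both goods, shifting each toward its comparative advantage raises
world output of both goods at once.

    # Hypothetical output per worker: (T-shirts, insurance units)
    output_per_worker = {"China": (10, 4), "France": (2, 2)}
    labour = {"China": 100, "France": 100}

    def world_output(shirt_share):
        """shirt_share maps country -> fraction of its labour making T-shirts."""
        shirts = insurance = 0.0
        for country, frac in shirt_share.items():
            s_rate, i_rate = output_per_worker[country]
            shirts += frac * labour[country] * s_rate
            insurance += (1 - frac) * labour[country] * i_rate
        return shirts, insurance

    print("labour split evenly:  ", world_output({"China": 0.5, "France": 0.5}))
    print("specialised (Ricardo):", world_output({"China": 0.7, "France": 0.0}))

With these figures the even split yields 600 T-shirts and 300 units of
insurance, while the specialised allocation yields 700 and 320: more of
both goods from the same labour, which is the whole force of the law.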

Government is a very dangerous toy. It is used to fight wars, impose
ideologies and enrich rulers. True, nowadays, our leaders do not
enrich themselves (at least not on the scale of the Sun King), but
they enrich their clients: they preside over vast and insatiable
parasitic bureaucracies that grow by Parkinson's Law and live off true
wealth creators such as traders and inventors.

Sure, it is possible to have too little government. Only, that has not
been the world's problem for millennia. After the century of Mao,
Hitler and Stalin, can anybody really say that the risk of too little
government is greater than the risk of too much? The dangerous idea we
all need to learn is that the more we limit the growth of government,
the better off we will all be.
_________________________________________________________________

DAVID PIZARRO
Psychologist, Cornell University

Hodgepodge Morality

What some individuals consider a sacrosanct ability to perceive moral
truths may instead be a hodgepodge of simpler psychological
mechanisms, some of which have evolved for other purposes.

It is increasingly apparent that our moral sense comprises a fairly
loose collection of intuitions, rules of thumb, and emotional
responses that may have emerged to serve a variety of functions, some
of which originally had nothing at all to do with ethics. These
mechanisms, when tossed in with our general ability to reason, seem to
be how humans come to answer the question of good and evil, right and
wrong. Intuitions about action, intentionality, and control, for
instance, figure heavily into our perception of what constitutes an
immoral act. The emotional reactions of empathy and disgust likewise
figure into our judgments of who deserves moral protection and who
doesn't. But the ability to perceive intentions probably didn't evolve
as a way to determine who deserves moral blame. And the emotion of
disgust most likely evolved to keep us safe from rotten meat and
feces, not to provide information about who deserves moral protection.

Discarding the belief that our moral sense provides a royal road to
moral truth is an uncomfortable notion. Most people, after all, are
moral realists. They believe acts are objectively right or wrong, like
math problems. The dangerous idea is that our intuitions may be poor
guides to moral truth, and can easily lead us astray in our everyday
moral decisions.
_________________________________________________________________

RANDOLPH M. NESSE
Psychiatrist, University of Michigan; Coauthor (with George Williams),
Why We Get Sick: The New Science of Darwinian Medicine

Unspeakable Ideas

The idea of promoting dangerous ideas seems dangerous to me. I spend
considerable effort to prevent my ideas from becoming dangerous,
except, that is, to entrenched false beliefs and to myself. For
instance, my idea that bad feelings are useful for our genes upends
much conventional wisdom about depression and anxiety. I find,
however, that I must firmly restrain journalists who are eager to
share the sensational but incorrect conclusion that depression should
not be treated. Similarly, many people draw dangerous inferences from
my work on Darwinian medicine. For example, just because fever is
useful does not mean that it should not be treated. I now emphasize
that evolutionary theory does not tell you what to do in the clinic,
it just tells you what studies need to be done.

I also feel obligated to prevent my ideas from becoming dangerous on a
larger scale. For instance, many people who hear about Darwinian
medicine assume incorrectly that it implies support for eugenics. I
encourage them to read history as well as my writings. The record
shows how quickly natural selection was perverted into Social
Darwinism, an ideology that seemed to justify letting poor people
starve. Related ideas keep emerging. We scientists have a
responsibility to challenge dangerous social policies incorrectly
derived from evolutionary theory. Racial superiority is yet another
dangerous idea that hurts real people. More examples come to mind all
too easily and some quickly get complicated. For instance, the idea
that men are inherently different from women has been used to justify
discrimination, but the idea that men and women have identical
abilities and preferences may also cause great harm.

While I don't want to promote ideas dangerous to others, I am
fascinated by ideas that are dangerous to anyone who expresses them.
These are "unspeakable ideas." By unspeakable ideas I don't mean those
whose expression is forbidden in a certain group. Instead, I propose
that there is a class of ideas whose expression is inherently dangerous
everywhere and always because of the nature of human social groups.
Such unspeakable ideas are anti-memes. Memes, both true and false,
spread fast because they are interesting and give social credit to
those who spread them. Unspeakable ideas, even true important ones,
don't spread at all, because expressing them is dangerous to those who
speak them.

So why, you may ask, is a sensible scientist even bringing the idea
up? Isn't the idea of unspeakable ideas a dangerous idea? I expect I
will find out. My hope is that a thoughtful exploration of unspeakable
ideas should not hurt people in general, perhaps won't hurt me much,
and might unearth some long-neglected truths.

Generalizations cannot substitute for examples, even if providing
examples is risky. So, please gather your own data. Here is an
experiment. The next time you are having a drink with an enthusiastic
fan of your hometown team, say, "Well, I think our team just isn't
very good and didn't deserve to win." Or, moving to more risky
territory, when your business group is trying to deal with a savvy
competitor, say, "It seems to me that their product is superior
because they are smarter than we are." Finally, and I cannot recommend
this but it offers dramatic data, you could respond to your spouse's
difficulties at work by saying, "If they are complaining about you not
doing enough, it is probably because you just aren't doing your fair
share." Most people do not need to conduct such social experiments to
know what happens when such unspeakable ideas are spoken.

Many broader truths are equally unspeakable. Consider, for instance,
all the articles written about leadership. Most are infused with
admiration and respect for a leader's greatness. Much rarer are
articles about the tendency for leadership positions to be attained by
power-hungry men who use their influence to further advance their
self-interest. Then there are all the writings about sex and marriage.
Most of them suggest that there is some solution that allows full
satisfaction for both partners while maintaining secure relationships.
Questioning such notions is dangerous, unless you are a comic, in
which case skepticism can be very, very funny.

As a final example, consider the unspeakable idea of unbridled
self-interest. Someone who says, "I will only do what benefits me,"
has committed social suicide. Tendencies to say such things have been
selected against, while those who advocate goodness, honesty and
service to others get wide recognition. This creates an illusion of a
moral society that then, thanks to the combined forces of natural and
social selection, becomes a reality that makes social life vastly more
agreeable.

There are many more examples, but I must stop here. To say more would
either get me in trouble or falsify my argument. Will I ever publish
my "Unspeakable Essays"? It would be risky, wouldn't it?
_________________________________________________________________

GREGORY BENFORD
Physicist, UC Irvine; Author, Deep Time

Think outside the Kyoto box

Few economists expect the Kyoto Accords to attain their goals. With
compliance coming only slowly and with three big holdouts -- the US,
China and India -- it seems unlikely to make much difference in
overall carbon dioxide increases. Yet all the political pressure is on
lessening our fossil fuel burning, in the face of fast-rising demand.
This pits the industrial powers against the legitimate economic
aspirations of the developing world -- a recipe for conflict.

Those who embrace the reality of global climate change mostly insist
that there is only one way out of the greenhouse effect -- burn less
fossil fuel, or else. Never mind the economic consequences. But the
planet itself modulates its atmosphere through several tricks, and we
have little considered using most of them. The overall global problem
is simple: we capture more heat from the sun than we radiate away.
Mostly this is a good thing, else the mean planetary temperature would
hover around freezing. But recent human alterations of the atmosphere
have resulted in too much of a good thing.

Two methods are getting little attention: sequestering carbon from the
air and reflecting sunlight.

Hide the Carbon

There are several schemes to capture carbon dioxide from the air:
promote tree growth; trap carbon dioxide from power plants in
exhausted gas domes; or let carbon-rich organic waste fall into the
deep oceans. Increasing forestation is a good, though rather limited,
step. Capturing carbon dioxide from power plants costs about 30% of
the plant output, so it's an economic nonstarter.

That leaves the third way. Imagine you are standing in a ripe Kansas
cornfield, staring up into a blue summer sky. A transparent acre-area
square around you extends upwards in an air-filled tunnel, soaring all
the way to space. That long tunnel holds carbon in the form of
invisible gas, carbon dioxide -- widely implicated in global climate
change. But how much?

Very little, compared with how much we worry about it. The corn
standing as high as an elephant's eye all around you holds four
hundred times as much carbon as there is in man-made carbon dioxide --
our villain -- in the entire column reaching to the top of the
atmosphere. (We have added a few hundred parts per million to our air
by burning.) Inevitably, we must understand and control the
atmosphere, as part of a grand imperative of directing the entire
global ecology. Yearly, we manage through agriculture far more carbon
than is causing our greenhouse dilemma.

Take advantage of that. The leftover corn cobs and stalks from our
fields can be gathered up, floated down the Mississippi, and dropped
into the ocean, sequestering their carbon. Below about a kilometer depth,
beneath a layer called the thermocline, nothing gets mixed back into
the air for a thousand years or more. It's not a forever solution, but
it would buy us and our descendants time to find such answers. And it
is inexpensive; cost matters.

The US has large crop residues. It has also ignored the Kyoto Accord,
saying it would cost too much. It would, if we relied purely on
traditional methods, policing energy use and carbon dioxide emissions.
Clinton-era estimates of such costs were around $100 billion a year --
a politically unacceptable sum, which led Congress to reject the very
notion by a unanimous vote.

But if the US simply used its farm waste to "hide" carbon dioxide from
our air, complying with Kyoto's standard would cost about $10 billion
a year, with no change whatsoever in energy use.

The whole planet could do the same. Sequestering crop leftovers could
offset about a third of the carbon we put into our air.

The carbon dioxide we add to our air will end up in the oceans,
anyway, from natural absorption, but not nearly quickly enough to help
us.

Reflect Away Sunlight

Hiding carbon from air is only one example of ways the planet has
maintained its perhaps precarious equilibrium throughout billions of
years. Another is our world's ability to edit sunlight, by changing
cloud cover.

As the oceans warm, water evaporates, forming clouds. These reflect
sunlight, reducing the heat below -- but just how much depends on
cloud thickness, water droplet size, particulate density -- a forest
of detail.

If our climate starts to vary too much, we could consider deliberately
adjusting cloud cover in selected areas, to offset unwanted heating.
It is not actually hard to make clouds; volcanoes and fossil fuel
burning do it all the time by adding microscopic particles to the air.
Cloud cover is a natural mechanism we can augment, and another area
where the possibility of major change in environmental thinking beckons.

A 1997 US Department of Energy study for Los Angeles showed that
planting trees and making blacktop and rooftops lighter colored could
significantly cool the city in summer. With minimal costs that get
repaid within five years we can reduce summer midday temperatures by
several degrees. This would cut air conditioning costs for the
residents, simultaneously lowering energy consumption, and lessening
the urban heat island effect. Incoming rain clouds would not rise as
much above the heat blossom of the city, and so would rain on it less.
Instead, clouds would continue inland to drop rain on the rest of
Southern California, promoting plant growth. These methods are now
under way in Los Angeles, a first experiment.

We can combine this with a cloud-forming strategy. Producing clouds
over the tropical oceans is the most effective way to cool the planet
on a global scale, since the dark oceans absorb the bulk of the sun's
heat. This we should explore now, in case sudden climate changes force
us to act quickly.

Yet some environmentalists find all such steps suspect. They smack of
engineering, rather than self-discipline. True enough -- and that's
what makes such thinking dangerous, for some.

Yet if Kyoto fails to gather momentum, as seems probable to many, what
else can we do? Turn ourselves into ineffectual Mommy-cop states, with
endless finger-pointing politics, trying to equally regulate both the
rich in their SUVs and Chinese peasants who burn coal for warmth? Our
present conventional wisdom might be termed The Puritan Solution --
Abstain, sinners! -- and is making slow, small progress. The Kyoto
Accord calls for the industrial nations to reduce their carbon dioxide
emissions to 7% below the 1990 level, and globally we are farther from
this goal every year.

These steps are early measures to help us assume our eventual 21st
Century role, as true stewards of the Earth, working alongside Nature.
Recently Billy Graham declared that since the Bible made us stewards
of the Earth, we have a holy duty to avert climate change. True
stewards use the Garden's own methods.
_________________________________________________________________

MARCO IACOBONI
Neuroscientist; Director, Transcranial Magnetic Stimulation Lab, UCLA

Media Violence Induces Imitative Violence: The Problem With Super
Mirrors

Media violence induces imitative violence. If true, this idea is
dangerous for at least two main reasons. First, because its
implications are highly relevant to the issue of freedom of speech.
Second, because it suggests that our rational autonomy is much more
limited than we like to think. This idea is especially dangerous now,
because we have discovered a plausible neural mechanism that can
explain why observing violence induces imitative violence. Moreover,
the properties of this neural mechanism -- the human mirror neuron
system -- suggest that imitative violence may not always be a
consciously mediated process. The argument for protecting even harmful
speech (intended in a broad sense, including movies and videogames)
has typically been that the effects of speech are always under the
mental intermediation of the listener/viewer. If there is a plausible
neurobiological mechanism that suggests that such an intermediate step
can be by-passed, this argument is no longer valid.

For more than 50 years behavioral data have suggested that media
violence induces violent behavior in the observers. Meta-analyses show
that the effect size of media violence is much larger than the effect
size of calcium intake on bone mass, or of asbestos exposure on
cancer. Still, the behavioral data have been criticized. How is that
possible? Two main types of data have been invoked: controlled
laboratory experiments, and correlational studies assessing the types
of media consumed and violent behavior. The lab data have been
criticized on the grounds of not having enough ecological validity,
whereas the correlational data have been criticized on the grounds
that they have no explanatory power. Here, as a neuroscientist who is
studying the
human mirror neuron system and its relations to imitation, I want to
focus on a recent neuroscience discovery that may explain why the
strong imitative tendencies that humans have may lead them to
imitative violence when exposed to media violence.

Mirror neurons are cells located in the premotor cortex, the part of
the brain relevant to the planning, selection and execution of
actions. In the ventral sector of the premotor cortex there are cells
that fire in relation to specific goal-related motor acts, such as
grasping, holding, tearing, and bringing to the mouth. Surprisingly, a
subset of these cells -- what we call mirror neurons -- also fire when
we observe somebody else performing the same action. The behavior of
these cells seems to suggest that the observer is looking at her/his
own actions reflected by a mirror, while watching somebody else's
actions. My group has also shown in several studies that human mirror
neuron areas are critical to imitation. There is also evidence that
the activation of this neural system is fairly automatic, thus
suggesting that it may by-pass conscious mediation. Moreover, mirror
neurons also code the intention associated with observed actions, even
though there is not a one-to-one mapping between actions and
intentions (I can grasp a cup because I want to drink or because I
want to put it in the dishwasher). This suggests that this system can
indeed code sequences of action (i.e., what happens after I grasp the
cup), even though only one action in the sequence has been observed.

Some years ago, when we still were a very small group of
neuroscientists studying mirror neurons and we were just starting
investigating the role of mirror neurons in intention understanding,
we discussed the possibility of super mirror neurons. After all, if
you have such a powerful neural system in your brain, you also want to
have some control or modulatory neural mechanisms. We now have
preliminary evidence suggesting that some prefrontal areas have super
mirrors. I think super mirrors come in at least two flavors. One is
inhibition of overt mirroring, and the other one -- the one that might
explain why we imitate violent behavior, which requires a fairly
complex sequence of motor acts -- is mirroring of sequences of motor
actions. Super mirror mechanisms may provide a fairly detailed
explanation of imitative violence after being exposed to media
violence.
_________________________________________________________________

BARRY C. SMITH
Philosopher, Birkbeck, University of London; Coeditor, Knowing Our Own
Minds

What We Know May Not Change Us

Human beings, like everything else, are part of the natural world. The
natural world is all there is. But to say that everything that exists
is just part of the one world of nature is not the same as saying that
there is just one theory of nature that will describes and explain
everything that there is. Reality may be composed of just one kind of
stuff and properties of that stuff but we need many different kinds of
theories at different levels of description to account for everything
there is.

Theories at these different levels may not be reducible to one another.
What matters is that they be compatible with one another. The
astronomy Newton gave us was a triumph over supernaturalism because it
united the mechanics of the sub-lunary world with an account of the
heavenly bodies. In a similar way, biology allowed us to advance from
a time when we saw life in terms of an elan vital. Today, the biggest
challenge is to explain our powers of thinking and imagination, our
abilities to represent and report our thoughts: the very means by
which we engage in scientific theorising. The final triumph of the
natural sciences over supernaturalism will be an account of the nature of
conscious experience. The cognitive and brain sciences have done much
to make that project clearer but we are still a long way from a fully
satisfying theory.

But even if we succeed in producing a theory of human thought and
reason, of perception, of conscious mental life, compatible with other
theories of the natural and biological world, will we relinquish our
cherished commonsense conceptions of ourselves as human beings, as
selves who know ourselves best, who deliberate and decide freely on
what to do and how to live? There is much evidence that we won't. As
humans we conceive ourselves as centres of experience, self-knowing
and free willing agents. We see ourselves and others as acting on our
beliefs, desires, hopes and fears, and as having responsibility for
much that we do and all that we say. And even as results in
neuroscience begin to show how much more automated, routinised and
pre-conscious much of our behaviour is, we remain unable to let go
of the self-beliefs that govern our day to day rationalisings and
dealings with others.

We are perhaps incapable of treating others as mere machines, even if
that turns out to be what we are. The self-conceptions we have are
firmly in place and sustained in spite of our best findings, and it
may be a fact about human beings that it will always be so. We are
curious and interested in neuroscientists' findings and we wonder at
them and about their applications to ourselves, but as the great
naturalistic philosopher David Hume knew, nature is too strong in us,
and it will not let us give up our cherished and familiar ways of
thinking for long. Hume knew that however curious an idea and vision
of ourselves we entertained in our study, or in the lab, when we
returned to the world to dine, make merry with our friends, our most
natural beliefs and habits returned and banished our stranger thoughts
and doubts. It is likely, at this end of the year, that whatever we
have learned and whatever we know about the error of our thinking and
about the fictions we maintain, they will still remain the most
dominant guiding force in our everyday lives. We may not be comforted
by this, but as creatures with minds who know they have minds --
perhaps the only minded creatures in nature in this position -- we are
at least able to understand our own predicament.
_________________________________________________________________

PHILIP W. ANDERSON
Physicist, Princeton University; Nobel Laureate in Physics 1977;
Author, The Economy as an Evolving Complex System

Dark Energy might not exist

Let's try one in cosmology. The universe contains at least 3 and
perhaps 4 very different kinds of matter, whose origins probably are
physically completely different. There is the Cosmic Background
Radiation (CBR) which is photons from the later parts of the Big Bang
but is actually the residue of all the kinds of radiation that were in
the Bang, like flavored hadrons and mesons which have annihilated and
become photons. You can count them and they tell you pretty well how
many quanta of radiation there were in the beginning; and observation
tells us that they were pretty uniformly distributed, in fact very,
and still are.

Next is radiant matter -- protons, mostly, and electrons. There are
only a billionth as many of them as quanta of CBR, but as radiation in
the Big Bang there were pretty much the same number, so all but one
out of a billion combined with an antiparticle and annihilated.
Nonetheless they are much heavier than the quanta of CBR, so they
have, all told, much more mass, and have some cosmological effect on
slowing down the Hubble expansion. There was an imbalance -- but what
caused that? That imbalance was generated by some totally independent
process, possibly during the very turbulent inflationary era.

In fact out to a tenth of the Hubble radius, which is as far as we can
see, the protons are very non-uniformly distributed, in a fractal
hierarchical clustering with things called "Great Walls" and giant
near-voids. The conventional idea is that this is all caused by
gravitational instability acting on tiny primeval fluctuations, and it
barely could be, but in order to justify that you have to have another
kind of matter.

So you need -- and actually see, but indirectly -- Dark Matter, which
is 30 times as massive, overall, as protons but you can't see anything
but its gravitational effects. No one has much clue as to what it is
but it seems to have to be assumed it is hadronic, otherwise why would
it be anything as close as a factor 30 to the protons? But really,
there is no reason at all to suppose its origin was related to the
other two, you know only that if it's massive quanta of any kind it is
nowhere near as many as the CBR, and so most of them annihilated in
the early stages. Again, we have no excuse for assuming that the
imbalance in the Dark Matter was uniformly distributed primevally,
even if the protons were, because we don't know what it is.

Finally, of course there is Dark Energy, that is if there is. On that
we can't even guess if it is quanta at all, but again we note that if
it is it probably doesn't add up in numbers to the CBR. The very
strange coincidence is that when we add this in there isn't any total
gravitation at all, and the universe as a whole is flat, as it would
be, incidentally, if all of the heavy parts were distributed
everywhere according to some random, fractal distribution like that of
the matter we can see -- because on the largest scale, a fractal's
density extrapolates to zero. That suggestion, implying that Dark
Energy might not exist, is considered very dangerously radical.
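
The point about fractal density is worth making explicit (a brief note
added here, not Anderson's own derivation): for a mass distribution of
fractal dimension D < 3, the mass enclosed within a radius R grows as

    M(R) \propto R^{D}, \qquad
    \bar{\rho}(R) = \frac{M(R)}{\tfrac{4}{3}\pi R^{3}} \propto R^{D-3},

so the average density within R falls toward zero as R grows, which is
the extrapolation Anderson is invoking.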

The posterior probability of any particular God is pretty small

Here's another, which compared to many other peoples' propositions
isn't so radical. Isn't God very improbable? You can't in any logical
system I can understand disprove the existence of God, or prove it for
that matter. But I think that in the probability calculus I use He is
very improbable.

There are a number of ways of making a formal probability theory which
incorporate Ockham's razor, the principle that one must not multiply
hypotheses unnecessarily. Two are called Bayesian probability theory,
and Minimum Entropy. If you have been taking data on something, and
the data are reasonably close to a straight line, these methods give
us a definable procedure by which you can estimate the probability
that the straight line is correct, not the polynomial which has as
many parameters as there are points, or some intermediate complex
curve. Ockham's razor is expressed mathematically as the fact that
there is a factor in the probability derived for a given hypothesis
that decreases exponentially in the number N of parameters that
describe your hypothesis -- it is the inverse of the volume of
parameter space. People who are trying to prove the existence of ESP
abominate Bayesianism and this factor because it strongly favors the
"Null hypothesis" and beats them every time.

Well, now, imagine how big the parameter space is for God. He could
have a long gray beard or not, be benevolent or malicious in a lot of
different ways and over a wide range of values, he can have a variety
of views on abortion, contraception, like or abominate human images,
like or abominate music, and the range of dietary prejudices He has
been credited with is as long as your arm. There is the heaven-hell
dimension, the one vs three question, and I haven't even mentioned
polytheism. I think there are certainly as many parameters as sects,
or more. If there is even a sliver of prior probability for the null
hypothesis, the posterior probability of any particular God is pretty
small.
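
A minimal sketch of the parameter-count penalty at work, in the
line-versus-polynomial setting described above, using the Bayesian
Information Criterion as a rough stand-in for the log evidence (the
data, noise level and polynomial degree below are illustrative, not
Anderson's):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 20)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)  # data close to a straight line

    def bic(degree):
        """n * log(mean squared residual) + k * log(n); lower is preferred."""
        coeffs = np.polyfit(x, y, degree)
        resid = y - np.polyval(coeffs, x)
        n, k = x.size, degree + 1
        return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

    print("straight line, 2 parameters:", round(bic(1), 1))
    print("polynomial, 9 parameters:   ", round(bic(8), 1))

The nine-parameter polynomial hugs the noise a little more closely, but
the term that grows with the number of parameters typically swamps that
small gain, so the straight line is preferred: the Ockham factor in
miniature.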
_________________________________________________________________

TIMOTHY TAYLOR
Archaeologist, University of Bradford; Author, The Buried Soul

The human brain is a cultural artefact.

Phylogenetically, humans represent an evolutionary puzzle. Walking on
two legs free the hands to do new things, like chip stones to make
modified tools -- the first artefacts, dating to 2.7 million years ago
-- but it also narrows the pelvis and dramatically limits the size of
possible fetal cranium. Thus the brain expansion that began after 2
million years ago should not have happened.

But imagine that, alongside chipped stone tools, one genus of hominin
appropriates the looped entrails of a dead animal, or learns to tie a
simple knot, and invents a sling (chimpanzees are known to carry water
in leaves and gorillas to measure water depth with sticks, so the
practical and abstract thinking required here can be safely assumed
for our human ancestors by this point).

In its sling, the hominin child can now hip ride with little
impairment to its parent's hands-free movement. This has the
unexpected and certainly unplanned consequence that it is no longer
important for it to be able to hang on as chimps do. Although, due to
the bio-mechanical constraints of a bipedal pelvis, the hominin child
cannot be born with a big head (thus large initial brain capacity) it
can now be born underdeveloped. That is to say, the sling frees
fetuses to be born in an ever more ontogenically retarded state. This
trend, which humans do indeed display, is called neoteny. The
retention of earlier features for longer means that the total
developmental sequence is extended in time far beyond the nine months
of natural gestation. Hominin children, born underdeveloped, could
grow their crania outside the womb in the pseudo-marsupial pouch of an
infant-carrying sling.

From this point onwards it is not hard to see how a distinctively
human culture emerges through the extra-uterine formation of higher
cognitive capacities -- the phylogenetic and ontogenic icing on the
cake of primate brain function. The child, carried by the parent into
social situations, watches vocalization. Parental selection for smart
features such as an ability to babble early may well, as others have
suggested, have driven the brain size increases until 250,000 years
ago -- a point when the final bio-mechanical limits of big-headed
mammals with narrow pelvises were reached by two species: Neanderthals
and us.

This is the phylogeny side of the case. In terms of ontogeny the
obvious applies -- it recapitulates phylogeny. The underdeveloped
brains of hominin infants were culture-prone, and in this sense, I do
not dissent from Dan Sperber's dangerous idea that 'culture is
natural'. But human culture, unlike the basic culture of learned
routines and tool-using observed in various mammals, is a system of
signs -- essentially the association of words with things and the
ascription and recognition of value in relation to this.

As Ernest Gellner once pointed out, taken cross-culturally, as a
species, humans exhibit by far the greatest range of behavioural
variation of any animal. However, within any on-going community of
people, with language, ideology and a culturally-inherited and
developed technology, conformity has usually been a paramount value,
with death often the price for dissent. My belief is that, due to the
malleability of the neotenic brain, cultural systems are physically
built into the developing tissue of the mind.

Instead of seeing the brain as the genetic hardware into which the
cultural software is loaded, and then arguing about the relative
determining influences of each in areas such as, say, sexual
orientation or mathematical ability (the old nature-nurture debate),
we can conclude that culture (as Richard Dawkins long ago noted in
respect of contraception) acts to subvert genes, but is also enabled
by them. Ontogenic retardation allowed both environment and the
developing milieu of cultural routines to act on brain hardware
construction alongside the working through of the genetic blueprint.
Just because the modern human brain is coded for by genes does not
mean that the critical self-consciousness for which it (within its own
community of brains) is famous is non-cultural any more than a
barbed-and-tanged arrowhead is non-cultural just because it is made of
flint.

The human brain has a capacity to go not just beyond nature, but
beyond culture too, by dissenting from old norms and establishing
others. The emergence of the high arts and science is part of this
process of the human brain, with its instrumental extra-somatic
adaptations and memory stores (books, laboratories, computers), and is
underpinned by the most critical thing that has been brought into
being in the encultured human brain: free will.

However, not all humans, or all human communities, seem capable of
equal levels of free-will. In extreme cases they appear to display
none at all. Reasons include genetic incapacity, but it is also
possible for a lack of mental freedom to be culturally engendered, and
sometimes even encouraged. Archaeologically, the evidence is there
from the first farming societies in Europe: the Neolithic massacre at
Talheim, where an entire community was genocidally wiped out except
for the youngest children, has been taken as evidence (supported by
anthropological analogies) of the re-enculturation of still flexible
minds within the community of the victors, to serve and live out their
orphaned lives as slaves. In the future, one might surmise that the
dark side of the development of virtual reality machines (described by
Clifford Pickover) will be the infinitely more subtle cultural
programming of impressionable individuals as sophisticated
conformists.

The interplay of genes and culture has produced in us potential for a
formidable range of abilities and intelligences. It is critical that
in the future we both fulfil and extend this potential in the realm of
judgment, choice and understanding in both sciences and arts. But the
idea of the brain as a cultural artefact is dangerous. Those with an
interest in social engineering -- tyrants and authoritarian regimes --
will almost certainly attempt to develop it to their advantage.
Free-will is threatening to the powerful who, by understanding its
formation, will act to undermine it in sophisticated ways. The
usefulness of cultural artefacts that have the degree of complexity of
human brains makes our own species the most obvious candidate for the
enhanced super-robot of the future, not just smart factory operatives
and docile consumers, but cunning weapons-delivery systems (suicide
bombers) and conformity-enforcers. At worst, the very special
qualities of human life that have been enabled by our remarkable
natural history, the confluence of genes and culture, could end up as
a realm of freedom for an elite few.
_________________________________________________________________

OLIVER MORTON
Chief News and Features Editor at Nature; Author, Mapping Mars

Our planet is not in peril

The truth of this idea is pretty obvious. Environmental crises are a
fundamental part of the history of the earth: there have been sudden
and dramatic temperature excursions, severe glaciations, vast asteroid
and comet impacts. Yet the earth is still here, unscathed.

There have been mass extinctions associated with some of these events,
while other mass extinctions may well have been triggered by subtler
internal changes to the biosphere. But none of them seem to have done
long-term harm. The first ten million years of the Triassic may have
been a little dull by comparison to the late Palaeozoic, what with a
large number of the more interesting species being killed in the great
mass extinction at the end of the Permian, but there is no evidence
that any fundamentally important earth processes did not eventually
recover. I strongly suspect that not a single basic biogeochemical
innovation -- the sorts of thing that underlie photosynthesis and the
carbon cycle, the nitrogen cycle, the sulphur cycle and so on -- has
been lost in the past 4 billion years.

Indeed, there is an argument to be made that mass extinctions are in
fact a good thing, in that they wipe the slate clean a bit and thus
allow exciting evolutionary innovations. This may be going a bit far.
While the Schumpeter-for-the-earth-system position seems plausible, it
also seems a little crudely progressivist. While to a mammal the
Tertiary seems fairly obviously superior to the Cretaceous, it's not
completely clear to me that there's an objective basis for that
belief. In terms of primary productivity, for example, the Cretaceous
may well have had an edge. But despite all this, it's hard to imagine
that the world would be a substantially better place if it had not
undergone the mass extinctions of the Phanerozoic.

Against this background, the current carbon/climate crisis seems
pretty small beer. The change in mean global temperatures seems quite
unlikely to be much greater than the regular cyclical change between
glacial and interglacial climates. Land use change is immense, but
it's not clear how long it will last, and there are rich seedbanks in
the soil that will allow restoration. If fossil fuel use goes
unchecked, carbon dioxide levels may rise as high as they were in the
Eocene, and do so at such a rate that they cause a transient spike in
ocean acidity. But they will not stay at those high levels, and the
Eocene was not such a terrible place.

The earth doesn't need ice caps, or permafrost, or any particular sea
level. Such things come and go and rise and fall as a matter of
course. The planet's living systems adapt and flourish, sometimes in a
way that provides negative feedback, occasionally with a positive
feedback that amplifies the change. A planet that made it through the
massive biogeochemical unpleasantness of the late Permian is in little
danger from a doubling, or even a quintupling, of the very low carbon
dioxide level that preceded the industrial revolution, or from the
loss of a lot of forests and reefs, or from the demise of half its
species, or from the thinning of its ozone layer at high latitudes.

But none of this is to say that we as people should not worry about
global change; we should worry a lot. This is because climate change
may not hurt the planet, but it hurts people. In particular, it will
hurt people who are too poor to adapt. Significant climate change will
change rainfall patterns, and probably patterns of extreme events as
well, in ways that could easily threaten the food security of hundreds
of millions of people supporting themselves through subsistence
agriculture or pastoralism. It will have a massive effect on the lives
of the relatively small number of people in places where sea ice is an
important part of the environment (and it seems unlikely that anything
we do now can change that). In other, more densely populated places
local environmental and biotic change may have similarly sweeping
effects.

Secondary to this, the loss of species, both known and unknown, will
be experienced by some as a form of damage that goes beyond any
deterioration in ecosystem services. Many people will feel themselves
and their world diminished by such extinctions even when they have no
practical consequences, despite the fact that they cannot ascribe an
objective value to their loss. One does not have to share the values
of these people to recognise their sincerity.

All of these effects provide excellent reasons to act. And yet many
people in the various green movements feel compelled to add on the
notion that the planet itself is in crisis, or doomed; that all life
on earth is threatened. And in a world where that rhetoric is common,
the idea that this eschatological approach to the environment is
baseless is a dangerous one. Since the 1970s the environmental
movement has based much of its appeal on personifying the planet and
making it seem like a single entity, then seeking to place it in some
ways "in our care". It is a very powerful notion, and one which
benefits from the hugely influential iconographic backing of the first
pictures of the earth from space; it has inspired much of the good
that the environmental movement has done. The idea that the planet is
not in peril could thus come to undermine the movement's power. This
is one of the reasons people react against the idea so strongly. One
respected and respectable climate scientist reacted to Andy Revkin's
recent use of the phrase "In fact, the planet has nothing to worry
about from global warming" in the New York Times with near apoplectic
fury.

If the belief that the planet is in peril were merely wrong, there
might be an excuse for ignoring it, though basing one's actions on
lies is an unattractive proposition. But the planet-in-peril idea is
an easy target for those who, for various reasons, argue against any
action on the carbon/climate crisis at all. In this, bad science is a
hostage to fortune. What's worse, the idea distorts environmental
reasoning, too. For example, laying stress on the non-issue of the
health of the planet, rather than the real issues of effects that harm
people, leads to a general preference for averting change rather than
adapting to it, even though providing the wherewithal for adaptation
will often be the most rational response.

The planet-in-peril idea persists in part simply through widespread
ignorance of earth history. But some environmentalists, and perhaps
some environmental reporters, will argue that the inflated rhetoric
that trades on this error is necessary in order to keep the show on
the road. The idea that people can be more easily persuaded to save
the planet, which is not in danger, than their fellow human beings,
who are, is an unpleasant and cynical one; another dangerous idea, not
least because it may indeed hold some truth. But if putting the planet
at the centre of the debate is a way of involving everyone, of making
us feel that we're all in this together, then one can't help noticing
that the ploy isn't working out all that well. In the rich nations,
many people may indeed believe that the planet is in danger -- but
they don't believe that they are in danger, and perhaps as a result
they're not clamouring for change loud enough, or in the right way, to
bring it about.

There is also a problem of learned helplessness. I suspect people are
flattered, in a rather perverse way, by the idea that their lifestyle
threatens the whole planet, rather than just the livelihoods of
millions of people they have never met. But the same sense of scale
that flatters may also enfeeble. They may come to think that the
problems are too great for them to do anything about.

Rolling carbon/climate issues into the great moral imperative of
improving the lives of the poor, rather than relegating them to the
dodgy rhetorical level of a threat to the planet as a whole, seems
more likely to be a sustainable long-term strategy. The most important
thing about environmental change is that it hurts people; the basis of
our response should be human solidarity.

The planet will take care of itself.
_________________________________________________________________

SAMUEL BARONDES
Neurobiologist and Psychiatrist, University of California San
Francisco; Author, Better Than Prozac

Using Medications To Change Personality

Personality -- the pattern of thoughts, feelings, and actions that is
typical of each of us -- is generally formed by early adulthood. But
many people still want to change. Some, for example, consider
themselves too gloomy and uptight and want to become more cheerful and
flexible. Whatever their aims, they often turn to therapists, self-help
books, and religious practices.

In the past few decades certain psychiatric medications have become an
additional tool for those seeking control of their lives. Initially
designed to be used for a few months to treat episodic psychological
disturbances such as severe depression, they are now being widely
prescribed for indefinite use to produce sustained shifts in certain
personality traits. Prozac is the best known of them, but many others
are on the market or in development. By directly affecting brain
circuits that control emotions, these medications can produce
desirable effects that may be hard to replicate by sheer force of will
or by behavioral exercises. Millions keep taking them continuously,
year after year, to modulate personality.

Nevertheless, despite the testimonials and apparent successes, the
sustained use of such drugs to change personality should still be
considered dangerous. Not because manipulation of brain chemicals is
intrinsically cowardly, immoral, or a threat to the social order. In
the opinion of experienced clinicians medications such as Prozac may
actually have the opposite effect, helping to build character and to
increase personal responsibility. The real danger is that there are no
controlled studies of the effects of these drugs on personality over
the many years or even decades in which some people are taking them.
So we are left with a reliance on opinion and belief. And this, as in
all fields, we know to be dangerous.
_________________________________________________________________

DAVID BODANIS
Writer, Consultant; Author, The Electric Universe

The hyper-Islamicist critique of the West as a decadent force that is
already on a downhill course might be true

I wonder sometimes if the hyper-Islamicist critique of the West as a
decadent force that is already on a downhill course might be true. At
first it seems impossible: no one's richer than the US, and no one has
as powerful an Army; western Europe has vast wealth and university
skills as well.

But what got me reflecting was the fact that in just four years after
Pearl Harbor, the US had defeated two of the greatest military forces
the world had ever seen. Everyone naturally accepted there had to be
restrictions on gasoline sales, to preserve limited supplies of gasoline
and rubber; profiteers were hated. But the first four years after
9/11? Detroit automakers find it easy to continue paying off
congressmen to ensure that gasoline-wasting SUV's aren't restricted in
any way.

There are deep trends behind this. Technology is supposed to be
speeding up, but if you think about it, airplanes have a similar feel
and speed to ones of 30 years ago; cars and oil rigs and credit cards
and the operations of the NYSE might be a bit more efficient than a
few decades ago, but also don't feel fundamentally different. Aside
from the telephones, almost all the objects and daily habits in
Spielberg's 20-year-old film E.T. are about the same as today.

What has changed is the possibility of quick change: it's a lot,
lot harder than it was before. Patents for vague, general ideas are
much easier to get than they were before, which slows down the
introduction of new technology; academics in biotech and other fields
are wary about sharing their latest research with potentially
competing colleagues (which slows down the creation of new technology
as well).

Even more, there's a tension, a fear of falling from the increasingly
fragile higher tiers of society, which means that social barriers are
higher as well. I went to adequate but not extraordinary public
(state) schools in Chicago, but my children go to private schools. I
suspect that many contributors to this site, unless they live in
academic towns where state schools are especially strong, are in a
similar position. This is fine for our children, but not for those of
the same theoretical potential, yet who lack parents who can afford
it.

Sheer inertia can mask such flaws for quite a while. The National
Academy of Sciences has shown that, once again, the percentage of
American-born university students studying the hard physical sciences
has gone down. At one time that didn't matter, for life in America --
and at the top American universities -- was an overwhelming lure for
ambitious youngsters from Seoul and Bangalore. But already there are
signs of that slipping, and who knows what it'll be like in another
decade or two.

There's another sort of inertia that's coming to an end as well. The
first generation of immigrants from farm to city bring with them the
attitudes of their farm world; the first generation of 'migrants' from
blue collar city neighborhoods to upper middle class professional life
bring similar attitudes of responsibility as well. We ignore what the
media pours out about how we're supposed to live. We're responsible
for parents, even when it's not to our economic advantage; we vote
against our short-term economic interests, because it's the 'right'
thing to do; we engage in philanthropy towards individuals of very
different backgrounds from ourselves. But why? In many parts of
America or Europe, the rules and habits creating those attitudes no
longer exist at all.

When that finally gets cut away, will what replaces it be strong
enough for us to survive?
_________________________________________________________________

NICHOLAS HUMPHREY
Psychologist, London School of Economics; Author, The Mind Made Flesh

It is undesirable to believe in a proposition when there is no ground
whatever for supposing it true

Bertrand Russell's idea, put forward 80 years ago, is about as
dangerous as they come. I don't think I can better it: "I wish to
propose for the reader's favourable consideration a doctrine which
may, I fear, appear wildly paradoxical and subversive. The doctrine in
question is this: that it is undesirable to believe in a proposition
when there is no ground whatever for supposing it true." (The opening
lines of his Sceptical Essays).
_________________________________________________________________

ERIC FISCHL
Artist, New York City; Mary Boone Gallery

The unknown becomes known, and is not replaced with a new unknown

Several years ago I stood in front of a painting by Vermeer. It was a
painting of a woman reading a letter. She stood near the window for
better lighting and behind her hung a map of the known world. I was
stunned by the revelation of this work. Vermeer understood something
so basic to human need it had gone virtually unnoticed: communication
from afar.

Everything we have done to make us more capable, more powerful, better
protected, more intelligent, has been by enhancing our physical
limitations, our perceptual abilities, our adaptability. When I think
of Vermeer's woman reading the letter I wonder how long did it take to
get to her? Then I think, my god, at some time we developed a system
in which one could leave home and send word back! We figured out a way
that we could be heard from far away and then another system so that
we can be seen from far away. Then I start to marvel at the alchemy of
painting and how we have been able to invest materials with
consciousness so that Vermeer can talk to me across time! I see too he
has put me in the position of not knowing as I am kept from reading
the content of the letter. In this way he has placed me at the edge,
the frontier of wanting to know what I cannot know. I want to know how
long this letter sender has been away and what he was doing all this
time. Is he safe? Does he still love her? Is he on his way home?

Vermeer puts me into what had been her condition of uncertainty. All I
can do is wonder and wait. This makes me think about how not knowing
is so important. Not knowing makes the world large and uncertain and
our survival tenuous. It is a mystery why humans roam and still more a
mystery why we still need to feel so connected to the place we have
left. The not knowing causes such profound anxiety that it, in turn,
spawns creativity. The impetus for this creativity is empowerment. Our
gadgets, gizmos, networks of transportation and communication, have
all been developed either to explore, utilize or master the unknown
territory.

If the unknown becomes known, and is not replaced with a new unknown,
if the farther we reach outward is connected only to how fast we can
bring it home, if the time between not knowing and knowing becomes too
small, creativity will be daunted. And so I worry, if we bring the
universe more completely, more effortlessly, into our homes will there
be less reason to leave them?
_________________________________________________________________

STANISLAS DEHAENE
Cognitive Neuropsychology Researcher, Institut National de la Santé,
Paris; Author, The Number Sense

Touching and pushing the limits of the human brain

From Copernicus to Darwin to Freud, science has a special way of
deflating human hubris by proposing what is frequently perceived, at
the time, as dangerous or pernicious ideas. Today, cognitive
neuroscience presents us with a new challenging idea, whose
accommodation will require substantial personal and societal effort --
the discovery of the intrinsic limits of the human brain.

Calculation was one of the first domains where we lost our special
status -- right from their inception, computers were faster than the
human brain, and they are now billions of times ahead of us in their
speed and breadth of number crunching. Psychological research shows
that our mental "central executive" is amazingly limited -- we can
process only one thought at a time, at a meager rate of five or ten
per second at most. This is rather surprising. Isn't the human brain
supposed to be the most massively parallel machine on earth? Yes, but
its architecture is such that the collective outcome of this parallel
organization, our mind, is a very slow serial processor. What we can
become aware of is intrinsically limited. Whenever we delve deeply
into the processing of one object, we become literally blind to other
items that would require our attention (the "attentional blink"
paradigm). We also suffer from an "illusion of seeing": we think that
we take in a whole visual scene and see it all at once, but research
shows that major chunks of the image can be changed surreptitiously
without our noticing.

True, relative to other animal species, we do have a special
combinatorial power, which lies at the heart of the remarkable
cultural inventions of mathematics, language, or writing. Yet this
combinatorial faculty only works on the raw materials provided by a
small number of core systems for number, space, time, emotion,
conspecifics, and a few other basic domains. The list is not very long
-- and within each domain, we are now discovering lots of little
ill-adapted quirks, evidence of stupid design as expected from a brain
arising from an imperfect evolutionary process (for instance, our
number system only gives us a sense of approximate quantity -- good
enough for foraging, but not for exact mathematics). I therefore do
not share Marc Hauser's optimism that our mind has a "universal" or
"limitless" expressive power. The limits are easy to touch in
mathematics, in topology for instance, where we struggle with the
simplest objects (is a curve a knot... or not?).

As we discover the limits of the human brain, we also find new ways to
design machines that go beyond those limits. Thus, we have to get
ready for a society where, more and more, the human mind will be
replaced by better computers and robots -- and where the human
operator will be increasingly considered a nuisance rather than an
asset. This is already the case in aeronautics, where flight stability
is ensured by fast cybernetics and where landing and take-off will
soon be handled by computer, apparently with much improved safety.
There are still a few domains where the human brain maintains an
apparent superiority. Visual recognition used to be one -- but
already, superb face recognition software is appearing, capable of
storing and recognizing thousands of faces with close to human
performance. Robotics is another. No robot to date is capable of
navigating smoothly through a complicated 3-D world. Yet a third area
of human superiority is high-level semantics and creativity: the human
ability to make sense of a story, to pull out the relevant knowledge
from a vast store of potentially useful facts, remains unequalled.
Suppose that, for the next 50 years, those are the main areas in which
engineers will remain unable to match the performance of the human
brain. Are we ready for a world in which the human contributions are
binary, either at the highest level (thinkers, engineers, artists...)
or at the lowest level, where human workforce remains cheaper than
mechanization? To some extent, I would argue that this great divide is
already here, especially between North and South, but also within our
developed countries, between upper and lower castes.
What are the solutions? I envisage two of them. The first is
education. The human brain to some extent is changeable. Thanks to
education, we can improve considerably upon the stock of mental tools
provided to us by evolution. In fact, relative to the large changes
that schooling can provide, whatever neurobiological differences
distinguish the sexes or the races are minuscule (and thus largely
irrelevant -- contra Steve Pinker). The crowning achievements of Sir
Isaac Newton are now accessible to any student in physics and algebra
-- whatever his or her skin color.
Of course, our learning ability isn't without bounds. It is itself
tightly limited by our genes, which merely allow a fringe of
variability in the laying down of our neuronal networks. We never
gain entirely new abilities -- we merely transform our existing
brain networks, a partial and constrained process that I have called
"cultural recycling" or "recyclage".
As we gain knowledge of brain plasticity, a major application of
cognitive neuroscience research should be the improvement of life-long
education, with the goal of optimizing this transformation of our
brains. Consider reading. We now understand much better how this
cultural capacity is laid down. A posterior brain network, initially
evolved to recognize objects and faces, gets partially recycled for
the shapes of letters and words, and learns to connect these shapes to
other temporal areas for sounds and words. Cultural evolution has
modified the shapes of letters so that they are easily learnable by
this brain network. But, the system remains amazingly imperfect.
Reading still has to go through the lopsided design of the retina,
where the blood vessels are put in front of the photoreceptors, and
where only a small region of the fovea has enough resolution to
recognize small print. Furthermore, both the design of writing systems
and the way in which they are taught are perfectible. In the end,
after years of training, we can only read at an appalling speed of
perhaps 10 words per second, a baud rate surpassed by any present-day
modem.
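
For a rough sense of scale, here is a back-of-the-envelope sketch in
Python; the word length and bits-per-character figures are
illustrative assumptions, not numbers from the essay:

    # Back-of-the-envelope comparison of trained human reading with a modem.
    # Word length and bits per character are assumptions, not essay figures.

    WORDS_PER_SECOND = 10      # the essay's estimate for a trained reader
    CHARS_PER_WORD = 6         # ~5 letters plus a space (assumption)
    BITS_PER_CHAR = 8          # uncompressed ASCII text (assumption)

    reading_bps = WORDS_PER_SECOND * CHARS_PER_WORD * BITS_PER_CHAR
    modem_bps = 56_000         # a late-1990s dial-up modem

    print(f"human reading: ~{reading_bps} bits per second")
    print(f"56k modem:     ~{modem_bps:,} bits per second")
    print(f"the modem is roughly {modem_bps // reading_bps}x faster")

Even on these generous assumptions, a trained reader takes in a few
hundred bits per second, two orders of magnitude below a dial-up
modem.
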
Nevertheless, this cultural invention has radically changed our
cognitive abilities, doubling our verbal working memory for instance.
Who knows what other cultural inventions might lie ahead of us, and
might allow us to further push the limits of our brain biology?
A second, more futuristic solution may lie in technology.
Brain-computer interfaces are already around the corner. They are
currently being developed for therapeutic purposes. Soon, cortical
implants will allow paralyzed patients to move equipment by direct
cerebral command. Will such devices later be applied to the normal
human brain, in the hopes of extending our memory span or the speed of
our access to information? And will we be able to forge a society in
which such tools do not lead to further divisions between, on the one
hand, high-tech brains powered by the best education and neuro-gear,
and on the other hand, low-tech man power just good enough for cheap
jobs?
_________________________________________________________________

JOEL GARREAU
Cultural Revolution Correspondent, Washington Post; Author, Radical
Evolution
[garreau100.jpg]

Suppose Faulkner was right?
In his December 10, 1950, Nobel Prize acceptance speech, William
Faulkner said:



I decline to accept the end of man. It is easy enough to say that
man is immortal simply because he will endure: that when the last
ding-dong of doom has clanged and faded from the last worthless
rock hanging tideless in the last red and dying evening, that even
then there will still be one more sound: that of his puny
inexhaustible voice, still talking. I refuse to accept this. I
believe that man will not merely endure: he will prevail.
He is immortal, not because he alone among creatures has an
inexhaustible voice, but because he has a soul, a spirit capable of
compassion and sacrifice and endurance. The poet's, the writer's,
duty is to write about these things. It is his privilege to help
man endure by lifting his heart, by reminding him of the courage
and honor and hope and pride and compassion and pity and sacrifice
which have been the glory of his past. The poet's voice need not
merely be the record of man, it can be one of the props, the
pillars to help him endure and prevail.

It's easy to dismiss such optimism. The reason I hope Faulkner was
right, however, is that we are at a turning point in history. For the
first time, our technologies are not so much aimed outward at
modifying our environment in the fashion of fire, clothes,
agriculture, cities and space travel. Instead, they are increasingly
aimed inward at modifying our minds, memories, metabolisms,
personalities and progeny. If we can do all that, then we are entering
an era of engineered evolution -- radical evolution, if you will -- in
which we take control of what it will mean to be human.
This is not some distant, science-fiction future. This is happening
right now, in our generation, on our watch. The GRIN technologies --
the genetic, robotic, information and nano processes -- are following
curves of accelerating technological change, the arithmetic of which
suggests that the last 20 years are not a guide to the next 20 years.
We are more likely to see that magnitude of change in the next eight.
Similarly, the amount of change of the last half century, going back
to the time when Faulkner spoke, may well be compressed into the next
14.
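
One way to see that arithmetic is to assume, purely for illustration,
that the underlying rate of change doubles every ten years (neither
the doubling time nor the model below is taken from the essay) and
ask how many future years it takes to match the cumulative change of
a given stretch of the past. A minimal Python sketch:

    import math

    # Toy model of accelerating change: assume the rate of change doubles
    # every DOUBLING_TIME years (an illustrative assumption, not a figure
    # from the essay), then ask how many future years it takes to match
    # the cumulative change of a given stretch of the past.

    def years_to_repeat(past_window, doubling_time):
        past_change = 1 - 2 ** (-past_window / doubling_time)
        return doubling_time * math.log2(1 + past_change)

    DOUBLING_TIME = 10  # years (assumption)
    for window in (20, 50):
        t = years_to_repeat(window, DOUBLING_TIME)
        print(f"change of the last {window} years recurs in ~{t:.0f} years")

Under that assumed curve the change of the last 20 years recurs in
roughly 8 years, in line with the figure above; the 14-year estimate
for the last half century evidently rests on a more detailed model of
the curves. That is the point: the exact numbers depend entirely on
the curve one assumes.
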
This raises the question of where we will gain the wisdom to guide
this torrent, and points to what happens if Faulkner was wrong. If we
humans are not so much able to control our tools, but instead come to
be controlled by them, then we will be heading into a
technodeterminist future.

You can get different versions of what that might mean.

Some would have you believe that a future in which our creations
eliminate the ills that have plagued mankind for millennia --
conquering pain, suffering, stupidity, ignorance and even death -- is
a vision of heaven. Some even welcome the idea that someday soon, our
creations will surpass the pitiful limitations of Version 1.0 humans,
themselves becoming a successor race that will conquer the universe,
and care for us benevolently.
Others feel strongly that a life without suffering is a life without
meaning, reducing humankind to ignominious, character-less husks. They
also point to what could happen if such powerful self-replicating
technologies get into the hands of bumblers or madmen. They can easily
imagine a vision of hell in which we wipe out not only our species,
but all of life on earth.
If Faulkner is right, however, there is a third possible future. That
is the one that counts on the ragged human convoy of divergent
perceptions, piqued honor, posturing, insecurity and humor once again
wending its way to glory. It puts a shocking premium on Faulkner's
hope that man will prevail "because he has a soul, a spirit capable of
compassion and sacrifice and endurance." It assumes that even as
change picks up speed, giving us less and less time to react, we will
still be able to rely on the impulse that Churchill described when he
said, "Americans can always be counted on to do the right thing--after
they have exhausted all other possibilities."
The key measure of such a "prevail" scenario's success would be an
increasing intensity of links between humans, not transistors. If some
sort of transcendence is achieved beyond today's understanding of
human nature, it would not be through some individual becoming
superman. Transcendence would be social, not solitary. The measure
would be the extent to which many transform together.
The very fact that Faulkner's proposition looms so large as we look
into the future does at least illuminate the present.
Referring to Faulkner's breathtaking line, "when the last ding-dong of
doom has clanged and faded from the last worthless rock hanging
tideless in the last red and dying evening, that even then there will
still be one more sound: that of his puny inexhaustible voice, still
talking," the author Bruce Sterling once told me, "You know, the most
interesting part about that speech is that part right there, where
William Faulkner, of all people, is alluding to H. G. Wells and the
last journey of the Traveler from The Time Machine. It's kind of a
completely heartfelt, probably drunk mishmash of cornball
crypto-religious literary humanism and the stark, bonkers, apocalyptic
notions of atomic Armageddon, human extinction, and deep Darwinian
geological time. Man, that was the 20th century all over."
_________________________________________________________________

HELEN FISHER
Research Professor, Department of Anthropology, Rutgers University;
Author, Why We Love
[fisher100.jpg]

If patterns of human love subtly change, all sorts of social and
political atrocities can escalate

Serotonin-enhancing antidepressants (such as Prozac and many others)
can jeopardize feelings of romantic love, feelings of attachment to a
spouse or partner, one's fertility and one's genetic future.

I am working with psychiatrist Andy Thomson on this topic. We base our
hypothesis on patient reports, fMRI studies, and other data on the
brain.
Foremost, as SSRIs elevate serotonin they also suppress dopaminergic
pathways in the brain. And because romantic love is associated with
elevated activity in dopaminergic pathways, it follows that SSRIs can
jeopardize feelings of intense romantic love. SSRIs also curb
obsessive thinking and blunt the emotions--central characteristics of
romantic love. One patient described this reaction well, writing:
"After two bouts of depression in 10 years, my therapist recommended I
stay on serotonin-enhancing antidepressants indefinitely. As
appreciative as I was to have regained my health, I found that my
usual enthusiasm for life was replaced with blandness. My romantic
feelings for my wife declined drastically. With the approval of my
therapist, I gradually discontinued my medication. My enthusiasm
returned and our romance is now as strong as ever. I am prepared to
deal with another bout of depression if need be, but in my case the
long-term side effects of antidepressants render them off limits".

SSRIs also suppress sexual desire, sexual arousal and orgasm in as
many as 73% of users. These sexual responses evolved to enhance
courtship, mating and parenting. Orgasm produces a flood of oxytocin
and vasopressin, chemicals associated with feelings of attachment and
pairbonding behaviors. Orgasm is also a device by which women assess
potential mates. Women do not reach orgasm with every coupling and the
"fickle" female orgasm is now regarded as an adaptive mechanism by
which women distinguish males who are willing to expend time and
energy to satisfy them. The onset of female anorgasmia may jeopardize
the stability of a long-term mateship as well.

Men who take serotonin-enhancing antidepressants also inhibit evolved
mechanisms for mate selection, partnership formation and marital
stability. The penis stimulates to give pleasure and advertise the
male's psychological and physical fitness; it also deposits seminal
fluid in the vaginal canal, fluid that contains dopamine, oxytocin,
vasopressin, testosterone, estrogen and other chemicals that most
likely influence a female partner's behavior.

These medications can also influence one's genetic future. Serotonin
increases prolactin by stimulating prolactin releasing factors.
Prolactin can impair fertility by suppressing hypothalamic GnRH
release, suppressing pituitary FSH and LH release, and/or suppressing
ovarian hormone production. Clomipramine, a strong serotonin-enhancing
antidepressant, adversely affects sperm volume and motility.

I believe that Homo sapiens has evolved (at least) three primary,
distinct yet overlapping neural systems for reproduction. The sex
drive evolved to motivate ancestral men and women to seek sexual union
with a range of partners; romantic love evolved to enable them to
focus their courtship energy on a preferred mate, thereby conserving
mating time and energy; attachment evolved to enable them to rear a
child through infancy together. The complex and dynamic interactions
between these three brain systems suggest that any medication that
changes their chemical checks and balances is likely to alter an
individual's courting, mating and parenting tactics, ultimately
affecting their fertility and genetic future.

The reason this is a dangerous idea is that the huge drug industry is
heavily invested in selling these drugs; millions of people currently
take these medications worldwide; and as these drugs become generic,
many more will soon imbibe -- inhibiting their ability to fall in love
and stay in love. And if patterns of human love subtly change, all
sorts of social and political atrocities can escalate.
_________________________________________________________________

PAUL DAVIES
Physicist, Macquarie University, Sydney; Author, How to Build a Time
Machine
[davies100.jpg]

The fight against global warming is lost

Some countries, including the United States and Australia, have been
in denial about global warming. They cast doubt on the science that
set alarm bells ringing. Other countries, such as the UK, are in
panic, and want to make drastic cuts in greenhouse emissions. Both
stances are irrelevant, because the fight is a hopeless one anyway. In
spite of the recent hike in the price of oil, the stuff is still cheap
enough to burn. Human nature being what it is, people will go on
burning it until it starts running out and simple economics puts the
brakes on. Meanwhile the carbon dioxide levels in the atmosphere will
just go on rising. Even if developed countries rein in their
profligate use of fossil fuels, the emerging Asian giants of China and
India will more than make up the difference. Rich countries, whose own
wealth derives from decades of cheap energy, can hardly preach
restraint to developing nations trying to climb the wealth ladder. And
without the obvious solution -- massive investment in nuclear energy
-- continued warming looks unstoppable.

Campaigners for cutting greenhouse emissions try to scare us by
proclaiming that a warmer world is a worse world. My dangerous idea is
that it probably won't be. Some bad things will happen. For example,
the sea level will rise, drowning some heavily populated or fertile
coastal areas. But in compensation Siberia may become the world's
breadbasket. Some deserts may expand, but others may shrink. Some
places will get drier, others wetter. The evidence that the world will
be worse off overall is flimsy. What is certainly the case is that we
will have to adjust, and adjustment is always painful. Populations
will have to move. In 200 years some currently densely populated
regions may be deserted. But the population movements over the past
200 years have been dramatic too. I doubt if anything more drastic
will be necessary. Once it dawns on people that, yes, the world really
is warming up and that, no, it doesn't imply Armageddon, then
international agreements like the Kyoto protocol will fall apart.

The idea of giving up the global warming struggle is dangerous because
it shouldn't have come to this. Mankind does have the resources and
the technology to cut greenhouse gas emission. What we lack is the
political will. People pay lip service to environmental
responsibility, but they are rarely prepared to put their money where
their mouth is. Global warming may turn out to be not so bad after
all, but many other acts of environmental vandalism are manifestly
reckless: the depletion of the ozone layer, the destruction of rain
forests, the pollution of the oceans. Giving up on global warming will
set an ugly precedent.
_________________________________________________________________

APRIL GORNIK
Artist, New York City; Danese Gallery
[gornik100.jpg]
The exact effect of art can't be controlled or fully anticipated

Great art makes itself vulnerable to interpretation, which is one
reason that it keeps being stimulating and fascinating for
generations. The problem inherent in this is that art could inspire
malevolent behavior, as per the notion popularly expressed by A
Clockwork Orange. When I was young, aspiring to be a conceptual
artist, it disturbed me greatly that I couldn't control the
interpretation of my work. When I began painting, it was even worse;
even I wasn't completely sure of what my art meant. That seemed
dangerous for me, personally, at that time. I gradually came not only
to respect the complexity and inscrutability of painting and art, but
to see how it empowers the object. I believe that works of art are
animated by their creators, and remain able to generate thoughts,
feelings, responses. However, the fact is that the exact effect of art
can't be controlled or fully anticipated.
_________________________________________________________________

JAMSHED BHARUCHA

Professor of Psychology, Provost, Senior Vice President, Tufts
University
[bharucha100.jpg]

The more we discover about cognition and the brain, the more we will
realize that education as we know it does not accomplish what we
believe it does

It is not my purpose to echo familiar critiques of our schools. My
concerns are of a different nature and apply to the full spectrum of
education, including our institutions of higher education, which
arguably are the finest in the world.

Our understanding of the intersection between genetics and
neuroscience (and their behavioral correlates) is still in its
infancy. This century will bring forth an explosion of new knowledge
on the genetic and environmental determinants of cognition and brain
development, on what and how we learn, on the neural basis of human
interaction in social and political contexts, and on variability
across people.

Are we prepared to transform our educational institutions if new
science challenges cherished notions of what and how we learn? As we
acquire the ability to trace genetic and environmental influences on
the development of the brain, will we as a society be able to agree on
what our educational objectives should be?

Since the advent of scientific psychology we have learned a lot about
learning. In the years ahead we will learn a lot more that will
continue to challenge our current assumptions. We will learn that some
things we currently assume are learnable are not (and vice versa),
that some things that are learned successfully don't have the impact
on future thinking and behavior that we imagine, and that some of the
learning that impacts future thinking and behavior is not what we
spend time teaching. We might well discover that the developmental
time course for optimal learning from infancy through the life span is
not reflected in the standard educational time line around which
society is organized. As we discover more about the gulf between how
we learn and how we teach, hopefully we will also discover ways to
redesign our systems -- but I suspect that the latter will lag behind
the former.

Our institutions of education certify the mastery of spheres of
knowledge valued by society. Several questions will become
increasingly pressing, and are even pertinent today. How much of this
learning persists beyond the time at which acquisition is certified?
How does this learning impact the lives of our students? How central
is it in shaping the thinking and behavior we would like to see among
educated people as they navigate, negotiate and lead in an
increasingly complex world?

We know that tests and admissions processes are selection devices that
sort people into cohorts on the basis of excellence on various
dimensions. We know less about how much even our finest examples of
teaching contribute to human development over and above selection and
motivation.

Even current knowledge about cognition (specifically, our
understanding of active learning, memory, attention, and implicit
learning) has not fully penetrated our educational practices, because
of inertia as well as a natural lag in the application of basic
research. For example, educators recognize that active learning is
superior to the passive transmission of knowledge. Yet we have a long
way to go to adapt our educational practices to what we already know
about active learning.

We know from research on memory that learning trials bunched up in
time produce less long term retention than the same learning trials
spread over time. Yet we compress learning into discrete packets
called courses, we test learning at the end of a course of study, and
then we move on. Furthermore, memory for both facts and methods of
analytic reasoning is context-dependent. We don't know how much of
this learning endures, how well it transfers to contexts different
from the ones in which the learning occurred, or how it influences
future thinking.
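
As a purely illustrative sketch of the spacing effect described
above, here is a toy forgetting-curve model in Python; its decay
constants and its rule that longer-spaced reviews strengthen memory
more are textbook-style simplifications, not parameters from any
particular study:

    # Toy forgetting-curve model of the spacing effect: the same three
    # study sessions, massed versus spread out, checked long after the
    # course ends. The decay numbers and the rule that longer-spaced
    # reviews strengthen memory more are illustrative simplifications,
    # not parameters from any particular study.

    def retention(review_days, test_day, initial_half_life=1.0):
        half_life = initial_half_life
        for prev, curr in zip(review_days, review_days[1:]):
            gap = curr - prev
            # crude "desirable difficulty": a review after a longer gap
            # leaves a more durable memory
            half_life = max(half_life, gap) * 2
        days_since_last = test_day - review_days[-1]
        return 0.5 ** (days_since_last / half_life)

    massed = [0, 0.5, 1]    # three sessions crammed into two days
    spaced = [0, 7, 21]     # the same three sessions over three weeks
    exam_day = 120          # a test long after instruction ends

    print(f"massed: {retention(massed, exam_day):.1%} retained")
    print(f"spaced: {retention(spaced, exam_day):.1%} retained")

The model is deliberately crude, but it reproduces the qualitative
finding: identical study time, scheduled differently, leaves very
different traces months later.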

At any given time we attend to only a tiny subset of the information
in our brains or impinging on our senses. We know from research on
attention that information is processed differently by the brain
depending upon whether or not it is attended, and that many factors --
endogenous and exogenous -- control our attention. Educators have been
aware of the role of attention in learning, but we are still far from
understanding how to incorporate this knowledge into educational
design. Moreover, new information presented in a learning situation is
interpreted and encoded in terms of prior knowledge and experience;
the increasingly diverse backgrounds of students placed in the same
learning contexts implies that the same information may vary in its
meaningfulness to different students and may be recalled differently.

Most of our learning is implicit, acquired automatically and
unconsciously from interactions with the physical and social
environment. Yet language -- and hence explicit, declarative or
consciously articulated knowledge -- is the currency of formal
education.

Social psychologists know that what we say about why we think and act
as we do is but the tip of a largely unconscious iceberg that drives
our attitudes and our behavior. Even as cognitive and social
neuroscience reveals the structure of these icebergs under the surface
of consciousness (for example, persistent cognitive illusions,
decision biases and perceptual biases to which even the best educated
can be unwitting victims), it will be less clear how to shape or
redirect these knowledge icebergs.

Research in social cognition shows clearly that racial, cultural and
other social biases get encoded automatically by internalizing
stereotypes and cultural norms. While we might learn about this
research in college, we aren't sure how to counteract these factors in
the very minds that have acquired this knowledge.

We are well aware of the power of non-verbal auditory and visual
information, which when amplified by electronic media capture the
attention of our students and sway millions. Future research should
give us a better understanding of nuanced non-verbal forms of
communication, including their universal and culturally based aspects,
as they are manifest in social, political and artistic contexts.

Even the acquisition of declarative knowledge through language -- the
traditional domain of education -- is being usurped by the internet at
our fingertips. Our university libraries and publication models are
responding to the opportunities and challenges of the information age.
But we will need to rethink some of our methods of instruction too.
Will our efforts at teaching be drowned out by information from
sources more powerful than even the best classroom teacher?

It is only a matter of time before we have brain-related technologies
that can alter or supplement cognition, influence what and how we
learn, and increase competition for our limited attention. Imagine the
challenges for institutions of education in an environment in which
these technologies are readily available, for better or worse.

The brain is a complex organ, and we will discover more of this
complexity. Our physical, social and information environments are also
complex and are becoming more so through globalization and advances in
technology. There will be no simple design principles for how we
structure education in response to these complexities.

As elite colleges and universities, we see increasing demand for the
branding we confer, but we will also see greater scrutiny from society
for the education we deliver. Those of us in positions of academic
leadership will need wisdom and courage to examine, transform and
justify our objectives and methods as educators.
_________________________________________________________________

JORDAN POLLACK

Computer Scientist, Brandeis University
[pollack100.jpg]

Science as just another Religion

We scientists like to think that our "way of knowing" is special.
Instead of holding beliefs based on faith in invisible omniscient
deities, or parchments transcribed from oral cultures, we use the
scientific method to discover and know. Truth may be eternal, but
human knowledge of that truth evolves over time, as new questions are
asked, data is recorded, hypotheses are tested, and replication and
refutation mechanisms correct the record.

So it is a very dangerous idea to consider Science as just another
Religion. It's not my idea, but one I noticed growing in a set of
Lakoffian Frames within the Memesphere.

One of the frames is that scientists are doom and gloom prophets. For
example, at a recent popular technology conference, a parade of
speakers spoke about the threats of global warming, the sea level
rising by 18 feet and destroying cities, more category 5 hurricanes,
etc. It was quite a reversal from the positivistic techno-utopian
promises of miraculous advances in medicine, computers, and weaponry
that have allowed science to bloom in the late 20th century. A friend
pointed out that -- in the days before Powerpoint -- these scientists
might be wearing sandwich-board signs saying "The End is Near!"

Another element in the framing of science as a religion is the
response to evidence-based policy. Scientists who do take political
stands on "moral" issues such as stem-cell research, death penalty,
nuclear weapons, global warming, etc., can be sidelined as atheists,
humanists, or agnostics who have no moral or ethical standing outside
their narrow specialty (as compared to, say, televangelist preachers.)

A third, and most nefarious, frame casts theory as one opinion
among others which should be represented out of fairness or tolerance.
This is the subterfuge used by Intelligent Design Creationists.

We may believe in the separation of church and state, but that
firewall has fallen. Science and Reason are losing political battles
to Superstition and Ignorance. Politics works by rewarding friends and
punishing enemies, and while our individual votes may be private, exit
polls have proven that Science didn't vote for the incumbent.

There seem to be three choices going forward: Reject, Accommodate, or
Embrace.

One path is to go on the attack against religion in the public
sphere. In his book The End of Faith, Sam Harris points out that
humoring people who believe in God
is like humoring people who believe that "a diamond [] the size of a
refrigerator" is buried in their back yard. There is a fine line
between pushing God out of our public institutions and repeating
religious intolerance of regimes past.

A second is to embrace Faith-Based Science. Since, from the
perspective of government, research is just another special interest
feeding at the public trough, we should change our model to be more
accommodating to political reality. Research is already sold like
highway construction projects, with a linear accelerator for your
state and a supercomputer center for mine, all done through direct
appropriations. All that needs to change is the justifications for
such spending.

How would Faith-Based Science work? Well, Physics could sing the psalm
that Perpetual Motion would solve the energy crisis, thereby
triggering a $500 billion program in free energy machines. (Of course,
God is on our side to repeal the Second Law of Thermodynamics!)
Astronomy could embrace Astrology and do grassroots PR through Daily
Horoscopes to gain mass support for a new space program. In fact, an
anti-gravity initiative could pass today if it were spun as a repeal
of the "heaviness tax." Using the renaming principle, the SETI program
can be re-legalized and brought back to life as the "Search for God"
project.

Finally, the third idea is to actually embrace this dangerous idea and
organize a new open-source spiritual and moral movement. I think a
new, greener religion, based on faith in the Gaia Hypothesis and an
11th commandment to "Protect the Earth" could catch on, especially if
welcoming to existing communities of faith. Such a movement could be a
new pulpit from which the evidence-based silent majority can speak
with both moral force and evangelical fervor about issues critical to
the future of our planet.
_________________________________________________________________

JUAN ENRIQUEZ
CEO, Biotechonomy; Founding Director, Harvard Business School's Life
Sciences Project; Author, The Untied States of America
[enriquez100.jpg]

Technology can untie the U.S.

Everyone grows and dies; the same is true of countries. The only
question is how long one postpones the inevitable. In the case of
some countries, life spans can be very long, so it is worth asking:
is the U.S. in adolescence, middle age, or old age? Do science and
technology accelerate or offset demise? And finally, "how many stars
will be in the U.S. flag in fifty years?"

There has yet to be a single U.S. president buried under the same flag
he was born under, yet we oft take continuity for granted. Just as
almost no newlyweds expect to divorce, citizens rarely assume their
beloved country, flag and anthem might end up an exhibit in an
archeology museum. But countries rich and poor, Asian, African, and
European have been untying time and again. In the last five decades
the number of UN members has tripled. This trend goes way beyond the
de-colonization of the 1960s, and it is not exclusive to failed
states; it is a daily debate within the United Kingdom, Italy, France,
Belgium, the Netherlands, Austria, and many others.

So far the Americas have remained mostly impervious to these global
trends, but, even if in God you trust, there are no guarantees. Over
the next decade waves of technology will wash over the U.S. Almost any
applied field you care to look at promises extraordinary change,
opportunities, and challenges. (Witness the entries in this edition of
Edge). How countries adapt to massive, rapid upheaval will go a long
way towards determining the eventual outcome. To paraphrase Darwin, it
is not the strongest, nor the largest, that survive; rather, it is
those best prepared to cope with change.

It is easy to argue that the U.S. could be a larger more powerful
country in fifty years. But it is also possible that, like so many
other great powers, it could begin to unravel and untie. This is not
something that depends on what we decide to do fifty years hence;
to a great extent it depends on what we choose to do, or choose to
ignore, today. There are more than a few worrisome trends.

Future ability to generate wealth depends on techno-literacy. But
educational excellence, particularly in grammar and high schools, is
far from uniform, and it is not world class. Time and again the U.S.
does poorly, particularly in regards to math and science, when
compared with its major trading partners. Internally, there are
enormous disparities between schools and between the number of
students that pass state competency exams and what federal tests tell
us about the same students. There are also large gaps in
techno-literacy between ethnic groups. By 2050 close to 40% of the
U.S. population will be Hispanic and African American. These groups
receive 3% of the PhDs in math and science today. How we prepare kids
for a world driven by life sciences, materials, robotics, IT, and
nanotechnology is critical. But we currently invest $22,000 in
federal dollars in those over 65 and just over $2,000 in those under
sixteen...

As ethnic, age, and regional gaps in the ability to adapt increase,
many grow wary of and frustrated by technology, open borders, free
trade, and smart immigrants. Historically, when others use newfangled
ways to leap ahead, it can lead to a conservative response. This is
likeliest within those societies and groups that have the most to
lose, often among those who have been the most successful. One often
observes a reflexive response: stop the train; I want to get off. Or,
as the Red Sox now say, just wait till last year. No more teaching
evolution, no more research into stem cells, no more Indian or Chinese
or Mexican immigrants, no matter how smart or hardworking they might
be. These individual battles are signs of a creeping xenophobia,
isolationism, and fury.

Within the U.S. there are many who are adapting very successfully.
They tend to concentrate in a very few zip codes, life science
clusters like 92121 (between Salk, Scripps, and UCSD) and
techno-empires like 02139 (MIT). Most of the nation's wealth and taxes
are generated by a few states and, within these states, within a
few square miles. It is those who live in these areas who are most
affronted by restrictions on research, the lack of science-literate
teenagers, and the reliance on God instead of science.

Politicians well understand these divides and they have gerrymandered
their own districts to reflect them. Because competitive congressional
elections are rarer today than turnovers within the Soviet Politburo,
there is rarely an open debate and discussion as to why other parts of
the country act and think so differently. The Internet and cable
further narrowcast news and views, tending to reinforce what one's
neighbors and communities already believe. Positions harden. Anger at
"the others" mounts.

Add a large and mounting debt to this equation, along with politicized
religion, and the mixture becomes explosive. The average household now
owes over $88,000 and the present value of what we have promised to
pay is now about $473,000. There is little willingness within
Washington to address a mounting deficit, never mind the current
account imbalance. Facing the next electoral challenge, few seem to
remember that the last act of many an empire is to drive itself into
bankruptcy.

Sooner or later we could witness some very bitter arguments about who
gets and who pays. In developed country after developed country, it is
often the richest, not the ethnically or religiously repressed, that
first seek autonomy and eventually dissolution. In this context it is
worth recalling that New England, not the South, has been the most
secession prone region. As the country expanded, New Englanders
attempted to include the right to untie into the constitution; the
argument was that as this great country expanded South and West they
would lose control over their political and economic destiny. Perhaps
this is what led to four separate attempts to untie the Union.

When we assume stability and continuity we can wake up to
irreconcilable differences. Science and a knowledge driven economy can
allow a few folks to build powerful and successful countries very
quickly, witness Korea, Taiwan, Singapore, Ireland, but changes of
this magnitude can also bury or split the formerly great who refuse to
adapt, as well as those who practice bad governance. If we do not
begin to address some current divides quickly we could live to see an
Un-Tied States of America.
_________________________________________________________________

STEPHEN M. KOSSLYN
Psychologist, Harvard University; Author, Wet Mind
[kosslyn100.jpg]

A Science of the Divine?

Here's an idea that many academics may find unsettling and dangerous:
God exists. And here's another idea that many religious people may
find unsettling and dangerous: God is not supernatural, but rather
part of the natural order. Simply stating these ideas in the same
breath invites them to scrape against each other, and sparks begin to
fly. To avoid such conflict, Stephen Jay Gould famously argued that we
should separate religion and science, treating them as distinct
"magisteria." But science leads many of us to try to understand all
that we encounter with a single, grand and glorious overarching
framework. In this spirit, let me try to suggest one way in which the
idea of a "supreme being" can fit into a scientific worldview.

I offer the following not to advocate the ideas, but rather simply to
illustrate one (certainly not the only) way that the concept of God
can be approached scientifically.

1.0. First, here's the specific conception of God I want to explore:
God is a "supreme being" that transcends space and time, permeates our
world but also stands outside of it, and can intervene in our daily
lives (partly in response to prayer).

2.0. A way to begin to think about this conception of the divine rests
on three ideas:

2.1. Emergent properties. There are many examples in science where
aggregates produce an entity that has properties that cannot be
predicted entirely from the elements themselves. For example, neurons
in large numbers produce minds; moreover, minds in large numbers
produce economic, political, and social systems.
2.2. Downward causality. Events at "higher levels" (where emergent
properties become evident) can in turn feed back and affect events at
lower levels. For example, chronic stress (a mental event) can cause
parts of the brain to become smaller. Similarly, an economic
depression or the results of an election affect the lives of the
individuals who live in that society.
2.3. The Ultimate Superset. The Ultimate Superset (superordinate set)
of all living things may have an equivalent status to an economy or
culture. It has properties that emerge from the interactions of living
things and groups of living things, and in turn can feed back to
affect those things and groups.

3.0. Can we conceive of God as an emergent property of all living
things that can in turn affect its constituents? Here are some ways in
which this idea is consistent with the nature of God, as outlined at
the outset.

3.1. This emergent entity is "transcendent" in the sense that it
exists in no specific place or time. Like a culture or an economy, God
is nowhere, although the constituent elements occupy specific places.
As for transcending time, consider this analogy: Imagine that 1/100th
of the neurons in your brain were replaced every hour, and each old
neuron programmed a new one so that the old one's functionality was
preserved. After 100 hours your brain would be an entirely new organ
-- but your mind would continue to exist as it had been before.
Similarly, as each citizen dies and is replaced by a child, the
culture continues to exist (and can grow and develop, with a "life of
its own"). So too with God. For example, in the story of Jacob's
ladder, Jacob realizes "Surely the Lord is in this place, and I did
not know it." (Genesis 28: 16) I interpret this story as illustrating
that God is everywhere but nowhere. The Ultimate Superset permeates
our world but also stands outside of (or, more specifically, "above")
it.
3.2. The Ultimate Superset can affect our individual lives. Another
analogy: Say that geese flying south for the winter have rather
unreliable magnetic field detectors in their brains. However, there's
a rule built into their brains that leads them to try to stay near
their fellows as they fly. The flock as a whole would navigate far
better than any individual bird, because the noise in the individual
birds' navigation systems would cancel out. The emergent entity -- the
flock -- in turn would affect the individual geese, helping them to
navigate better than they could on their own (a small simulation of
this averaging effect appears at the end of this section).
3.3. When people pray to the Lord, they beseech intervention on their
or others' behalf. The view that I've been outlining invites us to
think of the effects of prayer as akin to becoming more sensitive to
the need to stay close to the other birds in the flock: By praying,
one can become more sensitive to the emergent "supreme being." Such
increased sensitivity may imply that one can contribute more strongly
to this emergent entity.

By analogy, it's as if one of those geese became aware of the "keep
near" rule, and decided to nudge the other birds in a particular
direction -- which thereby allows it to influence the flock's effect
on itself. To the extent that prayer puts one closer to God, one's
plea for intervention will have a larger impact on the way that The
Ultimate Superset exerts downward causality. But note that, according
to this view, God works rather slowly. Think of dropping rocks in a
pond: it takes time for the ripples to propagate and eventually be
reflected back from the edge, forming interference patterns in the
center of the pond.
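
To make the averaging intuition concrete, here is a minimal
simulation sketch in Python; the flock sizes, the 30-degree compass
error, and the trial count are illustrative assumptions rather than
figures from the essay:

    import random
    import statistics

    # Minimal simulation of the flock analogy: each goose carries a noisy
    # estimate of the true heading; flying with the flock amounts to
    # averaging those estimates. Flock sizes, the 30-degree compass error
    # and the trial count are illustrative assumptions.

    TRUE_HEADING = 180.0    # due south, in degrees
    COMPASS_NOISE = 30.0    # per-goose error (standard deviation, degrees)
    TRIALS = 10_000

    def mean_error(flock_size):
        errors = []
        for _ in range(TRIALS):
            headings = [random.gauss(TRUE_HEADING, COMPASS_NOISE)
                        for _ in range(flock_size)]
            flock_heading = statistics.fmean(headings)  # "stay near" rule
            errors.append(abs(flock_heading - TRUE_HEADING))
        return statistics.fmean(errors)

    for n in (1, 10, 100):
        print(f"flock of {n:3d}: average heading error "
              f"~{mean_error(n):4.1f} degrees")

With these assumptions the average error falls roughly as the square
root of the flock size, which is the sense in which the individual
errors cancel out.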

4.0. A crucial idea in monotheistic religions is that God is the
Creator. The present approach may help us begin to grapple with this
idea, as follows.

4.1. First, consider each individual person. The environment plays a
key role in creating who and what we are because there are far too few
genes to program every aspect of our brains. For example, when you
were born, your genes programmed many connections in your visual
areas, but did not specify the precise circuits necessary to determine
how far away objects are. As an infant, the act of reaching for an
object tuned the brain circuits that estimate how far away the object
was from you.

Similarly, your genes graced you with the ability to acquire language,
but not with a specific language. The act of acquiring a language
shapes your brain (which in turn may make it difficult to acquire
another language, with different sounds and grammar, later in life).
Moreover, cultural practices configure the brains of members of the
culture. A case in point: the Japanese have many forms of bowing,
which are difficult for a Westerner to master relatively late in life;
when we try to bow, we "bow with an accent."
4.2. And the environment not only played an essential role in how we
developed as children, but also plays a continuing role in how we
develop over the course of our lives as adults. The act of learning
literally changes who and what we are.
4.3. According to this perspective, it's not just negotiating the
physical world and sociocultural experience that shape the brain: The
Ultimate Superset -- the emergent property of all living things --
affects all of the influences that "make us who and what we are," both
as we develop during childhood and continue to learn and develop as
adults.
4.4. Next, consider our species. One could try to push this
perspective into a historical context, and note that evolution by
natural selection reflects the effects of interactions among living
things. If so, then the emergent properties of such interactions could
feed back to affect the course of evolution itself.

In short, it is possible to begin to view the divine through the lens
of science. But such reasoning does no more than set the stage; to be
a truly dangerous idea, this sort of proposal must be buttressed by
the results of empirical test. At present, my point is not to
convince, but rather to intrigue. As much as I admired Stephen Jay
Gould (and I did, very much), perhaps he missed the mark on this one.
Perhaps there is a grand project waiting to be launched, to integrate
the two great sources of knowledge and belief in the world today --
science and religion.
_________________________________________________________________

JERRY COYNE
Evolutionary Biologist; Professor, Department of Ecology and
Evolution, University of Chicago; Author (with H. Allen Orr),
Speciation
[coyne100.jpg]

Many behaviors of modern humans were genetically hard-wired (or
soft-wired) in our distant ancestors by natural selection

For me, one idea that is dangerous and possibly true is an extreme
form of evolutionary psychology -- the view that many behaviors of
modern humans were genetically hard-wired (or soft-wired) in our
distant ancestors by natural selection.
The reason I say that this idea might be true is that we cannot be
sure of the genetic and evolutionary underpinnings of most human
behaviors. It is difficult or impossible to test many of the
conjectures of evolutionary psychology. Thus, we can say only that
behaviors such as the sexual predilections of men versus women, and
the extreme competitiveness of males, are consistent with evolutionary
psychology.

But consistency arguments have two problems. First, they are not hard
scientific proof. Are we satisfied that sonnets are phallic extensions
simply because some male poets might have used them to lure females?
Such arguments fail to meet the normal standards of scientific
evidence.

Second, as is well known, one can make consistency arguments for
virtually every human behavior. Given the possibilities of kin
selection (natural selection for behaviors that do no good for
their performers but are advantageous to their relatives) and
reciprocal altruism, and our ignorance of the environments of our
ancestors, there is no trait beyond evolutionary explanation. Indeed,
there are claims for the evolutionary origin of even manifestly
maladaptive behaviors, such as homosexuality, priestly celibacy, and
extreme forms of altruism (e.g., self-sacrifice during wartime). But
surely we cannot consider it scientifically proven that genes for
homosexuality are maintained in human populations by kin selection.
This remains possible but undemonstrated.

Nevertheless, much of human behavior does seem to conform to Darwinian
expectations. Males are promiscuous and females coy. We treat our
relatives better than we do other people. The problem is where to draw
the line between those behaviors that are so obviously adaptive that
no one doubts their genesis (e.g. sleeping and eating), those which
are probably but not as obviously adaptive (e.g., human sexual
behavior and our fondness for fats and sweets) and those whose
adaptive basis is highly speculative (e.g., the origin of art and our
love of the outdoors).

Although I have been highly critical of evolutionary psychology, I
have not done so from political motives, nor do I think that the
discipline is in principle misguided. Rather, I have been critical
because evolutionary psychologists seem unwilling to draw lines
between what can be taken as demonstrated and what remains
speculative, making the discipline more of a faith than a science.
This lack of rigor endangers the reputation of all of evolutionary
biology, making our endeavors seem to be merely the concoction of
ingenious stories. If we are truly to understand human nature, and use
this knowledge constructively, we must distinguish the probably true
from the possibly true.

So, why do I see evolutionary psychology as dangerous? I think it is
because I am afraid to see myself and my fellow humans as mere
marionettes dancing on genetic strings. I would like to think that we
have immense freedom to better ourselves as individuals and to create
a just and egalitarian society. Granted, genetics is not destiny, but
neither are we completely free of our evolutionary baggage. Might
genetics really hold a leash on our capacity to change? If so, then
some claims of evolutionary psychology give us convenient but
dangerous excuses for behaviors that seem unacceptable. It is all too
easy, for example, for philandering males to excuse their behavior as
evolutionarily justified. Evolutionary psychologists argue that it is
possible to overcome our evolutionary heritage. But what if it is not
so easy to take the Dawkinsian road and "rebel against the tyranny of
the selfish replicators"?
_________________________________________________________________

ERNST PÖPPEL
Neuroscientist, Chairman, Board of Directors, Human Science Center and
Department of Medical Psychology, Munich University, Germany; Author,
Mindworks
[poppel100.jpg]

My belief in science

The average life expectancy of a species on this globe is just a few
million years. From an external point of view, it would be nothing
special if humankind suddenly disappeared. We have been here for some
time. With humans no longer around, evolutionary processes would
have an even better chance to fill in all those ecological niches
which have been created by human activities. As we change the world,
and as thousands of species are lost every year because of human
activities, we provide a new and productive environment for the
creation of new species. Thus, humankind is very creative with respect
to providing a frame for new evolutionary trajectories, and it would
be even more creative if it disappeared altogether. If somebody
(unfortunately not our descendants) were to visit this globe some
time later, they would meet many new species which owe their
existence to the presence, and the disappearance, of humankind.

But this is not going to happen, because we are doing science. With
science we apparently get a better understanding of basic principles
in nature, we have a chance to improve quality of life, and we can
develop means to extend the life expectancy of our species.
Unfortunately, some of these scientific activities have a paradoxical
effect, resulting in a higher risk of our collective disappearance.
Maybe science will not be so effective after all at preventing our
disappearance.

Only now comes my dangerous idea as my (!) dangerous idea. It is not
so difficult to come up with a dangerous scenario on a general level,
but if one also takes such a question seriously on a personal level,
one has to meditate on an individual scenario. I am very grateful for
this question formulated by Steven Pinker as it forced me to visit my
episodic memory and to think about what has been and still is "my
dangerous idea". Although nobody else might be interested in a
personal statement, I say it anyway: My dangerous idea is my belief in
science.

In all my research (in the field of temporal perception or visual
processes) I have a basic trust in the scientific activities, and I
actually believe the results I have obtained. And I believe the
results of others. But why? I know that there are so many unknown and
unknowable variables that are part of the experimental setup and which
cannot be controlled. How can I trust in spite of so many unknowables
(does this word exist in English?)? Furthermore, can I really rely on
my thinking, can I trust my eyes and ears? Can I be so sure about my
scientific activities that I communicate with pride the results to
others? If I look at the complexity of the brain, how is it possible
that something reasonable comes out of this network? How is it
possible that a face that I see or a thought that I have maintain
their identity over time? If I have no access to what goes on in my
brain, how can I be so proud (how can anybody be so proud) of
scientific achievements?
_________________________________________________________________

GEOFFREY MILLER
Evolutionary Psychologist, University of New Mexico; Author, The
Mating Mind
[miller100.jpg]

Runaway consumerism explains the Fermi Paradox

The story goes like this: Sometime in the 1940s, Enrico Fermi was
talking about the possibility of extra-terrestrial intelligence with
some other physicists. They were impressed that our galaxy holds 100
billion stars, that life evolved quickly and progressively on earth,
and that an intelligent, exponentially-reproducing species could
colonize the galaxy in just a few million years. They reasoned that
extra-terrestrial intelligence should be common by now. Fermi listened
patiently, then asked simply, "So, where is everybody?". That is, if
extra-terrestrial intelligence is common, why haven't we met any
bright aliens yet? This conundrum became known as Fermi's Paradox.
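
The "few million years" rests on simple arithmetic about how fast
even a slow colonization wave would cross a galaxy roughly 100,000
light-years wide. A back-of-the-envelope sketch in Python, with an
assumed effective expansion speed of one percent of light speed (an
illustrative number, not one from the essay):

    # Back-of-the-envelope arithmetic behind Fermi's question: even a slow
    # colonization wave sweeps the galaxy quickly on cosmic timescales.
    # Both parameters are illustrative assumptions, not essay figures.

    GALAXY_DIAMETER_LY = 100_000  # rough Milky Way diameter, light-years
    WAVE_SPEED = 0.01             # expansion speed as a fraction of light
                                  # speed (travel plus settling time)

    crossing_time = GALAXY_DIAMETER_LY / WAVE_SPEED  # years
    galaxy_age = 13e9                                # years, roughly

    print(f"time to sweep the galaxy: ~{crossing_time / 1e6:.0f} "
          f"million years")
    print(f"fraction of the galaxy's age: {crossing_time / galaxy_age:.2%}")

Ten million years is well under a tenth of a percent of the galaxy's
age, which is what makes the observed silence puzzling.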

The paradox has become ever more baffling. Over 150 extrasolar
planets have been identified in the last few years, suggesting that
life-hospitable planets orbit most stars. Paleontology shows that
organic life evolved very quickly after earth's surface cooled and
became life-hospitable. Given simple life, evolution shows progressive
trends towards larger bodies, brains, and social complexity.
Evolutionary psychology reveals several credible paths from simpler
social minds to human-level creative intelligence. Yet 40 years of
intensive searching for extra-terrestrial intelligence have yielded
nothing. No radio signals, no credible spacecraft sightings, no close
encounters of any kind.

So, it looks as if there are two possibilities. Perhaps our science
over-estimates the likelihood of extra-terrestrial intelligence
evolving. Or, perhaps evolved technical intelligence has some deep
tendency to be self-limiting, even self-exterminating. After
Hiroshima, some suggested that any aliens bright enough to make
colonizing space-ships would be bright enough to make thermonuclear
bombs, and would use them on each other sooner or later. Perhaps
extra-terrestrial intelligence always blows itself up. Fermi's Paradox
became, for a while, a cautionary tale about Cold War geopolitics.

I suggest a different, even darker solution to Fermi's Paradox.
Basically, I think the aliens don't blow themselves up; they just get
addicted to computer games. They forget to send radio signals or
colonize space because they're too busy with runaway consumerism and
virtual-reality narcissism. They don't need Sentinels to enslave them
in a Matrix; they do it to themselves, just as we are doing today.

The fundamental problem is that any evolved mind must pay attention to
indirect cues of biological fitness, rather than tracking fitness
itself. We don't seek reproductive success directly; we seek tasty
foods that tended to promote survival and luscious mates who tended to
produce bright, healthy babies. Modern results: fast food and
pornography. Technology is fairly good at controlling external reality
to promote our real biological fitness, but it's even better at
delivering fake fitness -- subjective cues of survival and
reproduction, without the real-world effects. Fresh organic fruit
juice costs so much more than nutrition-free soda. Having real friends
is so much more effort than watching Friends on TV. Actually
colonizing the galaxy would be so much harder than pretending to have
done it when filming Star Wars or Serenity.

Fitness-faking technology tends to evolve much faster than our
psychological resistance to it. The printing press is invented; people
read more novels and have fewer kids; only a few curmudgeons lament
this. The Xbox 360 is invented; people would rather play a
high-resolution virtual ape in Peter Jackson's King Kong than be a
perfect-resolution real human. Teens today must find their way through
a carnival of addictively fitness-faking entertainment products: MP3,
DVD, TiVo, XM radio, Verizon cellphones, Spice cable, EverQuest
online, instant messaging, Ecstasy, BC Bud. The traditional staples of
physical, mental, and social development (athletics, homework, dating)
are neglected. The few young people with the self-control to pursue
the meritocratic path often get distracted at the last minute -- the
MIT graduates apply to do computer game design for Electronic Arts,
rather than rocket science for NASA.

Around 1900, most inventions concerned physical reality: cars,
airplanes, zeppelins, electric lights, vacuum cleaners, air
conditioners, bras, zippers. In 2005, most inventions concern virtual
entertainment -- the top 10 patent-recipients are usually IBM,
Matsushita, Canon, Hewlett-Packard, Micron Technology, Samsung, Intel,
Hitachi, Toshiba, and Sony -- not Boeing, Toyota, or Wonderbra. We
have already shifted from a reality economy to a virtual economy, from
physics to psychology as the value-driver and resource-allocator. We
are already disappearing up our own brainstems. Freud's pleasure
principle triumphs over the reality principle. We narrow-cast
human-interest stories to each other, rather than broad-casting
messages of universal peace and progress to other star systems.

Maybe the bright aliens did the same. I suspect that a certain period
of fitness-faking narcissism is inevitable after any intelligent life
evolves. This is the Great Temptation for any technological species --
to shape their subjective reality to provide the cues of survival and
reproductive success without the substance. Most bright alien species
probably go extinct gradually, allocating more time and resources to
their pleasures, and less to their children.

Heritable variation in personality might allow some lineages to resist
the Great Temptation and last longer. Those who persist will evolve
more self-control, conscientiousness, and pragmatism. They will evolve
a horror of virtual entertainment, psychoactive drugs, and
contraception. They will stress the values of hard work, delayed
gratification, child-rearing, and environmental stewardship. They will
combine the family values of the Religious Right with the
sustainability values of the Greenpeace Left.

My dangerous idea-within-an-idea is that this, too, is already
happening. Christian and Muslim fundamentalists, and anti-consumerism
activists, already understand exactly what the Great Temptation is,
and how to avoid it. They insulate themselves from our Creative-Class
dream-worlds and our EverQuest economics. They wait patiently for our
fitness-faking narcissism to go extinct. Those practical-minded
breeders will inherit the earth, as like-minded aliens may have
inherited a few other planets. When they finally achieve Contact, it
will not be a meeting of novel-readers and game-players. It will be a
meeting of dead-serious super-parents who congratulate each other on
surviving not just the Bomb, but the Xbox. They will toast each other
not in a soft-porn Holodeck, but in a sacred nursery.
_________________________________________________________________

ROBERT SHAPIRO
Professor Emeritus, Senior Research Scientist, Department of
Chemistry, New York University. Author, Planetary Dreams
[shapiro100.jpg]

We shall understand the origin of life within the next 5 years

Two very different groups will find this development dangerous, and
for different reasons, but this outcome is best explained at the end
of my discussion.

Just over a half century ago, in the spring of 1953, a famous
experiment brought enthusiasm and renewed interest to this field.
Stanley Miller, mentored by Harold Urey, demonstrated that a mixture
of small organic molecules (monomers) could readily be prepared by
exposing a mixture of simple gases to an electrical spark. Similar
mixtures were found in meteorites, which suggested that organic
monomers may be widely distributed in the universe. If the ingredients
of life could be made so readily, then why could they not just as
easily assort themselves to form cells?

In that same spring, however, another famous paper was published by
James Watson and Francis Crick. They demonstrated that the heredity of
living organisms was stored in a very large molecule called DNA.
DNA is a polymer, a substance made by stringing many smaller units
together, as links are joined to form a long chain.

The clear connection between the structure of DNA and its biological
function, and the geometrical beauty of the DNA double helix led many
scientists to consider it to be the essence of life itself. One flaw
remained, however, to spoil this picture. DNA could store information,
but it could not reproduce itself without the assistance of proteins,
a different type of polymer. Proteins are also adept at increasing the
rate of (catalyzing) many other chemical reactions that are considered
necessary for life. The origin of life field became mired in the
"chicken-or-the egg" question. Which came first: DNA or proteins? An
apparent answer emerged when it was found that another polymer, RNA (a
cousin of DNA) could manage both heredity and catalysis. In 1986,
Walter Gilbert proposed that life began with an "RNA World." Life
started when an RNA molecule that could copy itself was formed, by
chance, in a pool of its own building blocks.

Unfortunately, a half century of chemical experiments has
demonstrated that nature has no inclination to prepare RNA, or even
the building blocks (nucleotides) that must be linked together to form
RNA. Nucleotides are not formed in Miller-type spark discharges, nor
are they found in meteorites. Skilled chemists have prepared
nucleotides in well-equipped laboratories, and linked them to form
RNA, but neither chemists nor laboratories were present when life
began on the early Earth. The Watson-Crick theory sparked a revolution
in molecular biology, but it left the origin-of-life question at an
impasse.

Fortunately, an alternative solution to this dilemma has gradually
emerged: neither DNA nor RNA nor protein was necessary for the origin
of life. Large molecules dominate the processes of life today, but
they were not needed to get it started. Monomers themselves have the
ability to support heredity and catalysis. The key requirement is that
a suitable energy source be available to assist them in the processes
of self-organization. A demonstration of the principle involved in the
origin of life would require only that a suitable monomer mixture be
exposed to an appropriate energy source in a simple apparatus. We
could then observe the very first steps in evolution.

Some mixtures will work, but many others will fail, for technical
reasons. Some dedicated effort will be needed in the laboratory to
prove this point. Why have I specified five years for this discovery?
The unproductive polymer-based paradigm is far from dead, and
continues to consume the efforts of the majority of workers in the
field. A few years will be needed to entice some of them to explore
the other solution. I estimate that several years more (the time for a
PhD thesis) might be required to identify a suitable monomer-energy
combination, and perform a convincing demonstration.

Who would be disturbed if such efforts should succeed? Many scientists
have been attracted by the RNA World theory because of its elegance
and simplicity. Some of them have devoted decades of their careers to
efforts to prove it. They would not be pleased if Freeman Dyson's
description proved to be correct: "life began with little bags, the
precursors of cells, enclosing small volumes of dirty water containing
miscellaneous garbage."

A very different group would find this development as dangerous as the
theory of evolution. Those who advocate creationism and intelligent
design would feel that another pillar of their belief system was under
attack. They have understood the flaws in the RNA World theory, and
used them to support their supernatural explanation for life's origin.
A successful scientific theory in this area would leave one less task
for God to accomplish: the origin of life would be a natural (and
perhaps frequent) result of the physical laws that govern this
universe. This latter thought falls directly in line with the idea of
Cosmic Evolution, which asserts that events since the Big Bang have
moved almost inevitably in the direction of life. No miracle or
immense stroke of luck was needed to get it started. If this should be
the case, then we should expect to be successful when we search for
life beyond this planet. We are not the only life that inhabits this
universe.
_________________________________________________________________

KAI KRAUSE
Researcher, philosopher, software developer, Author: 3DScience: new
Scanning Electron Microscope imagery
[krause100.jpg]
Anty Gravity: Chaos Theory in an all too practical sense

Dangerous Ideas? It is dangerous ideas you want? From this group of
people? That in itself ought to be nominated as one of the more
dangerous ideas...

Danger is ubiquitous. If recent years have shown us anything, it
should be that "very simple small events can cause real havoc in our
society". A few hooded youths play cat and mouse with the police:
bang, thousands of burned cars put all of Paris into a complete state
of paralysis, with mandatory curfews and the entire system in shock and
horror.

My first thought was: what if a really smart set of people really set
their minds to it? How utterly and scarily trivial it would be to
disrupt the very fabric of life, to bring society to a dead stop.

The relative innocence and stability of the last 50 years may spiral
into a nearly inevitable exposure to real chaos. What if it isn't
haphazard, testosterone-driven riots, where they cannibalize their own
neighborhood, much like in L.A. in the early '90s, but someone with
real insight behind that criminal energy? What if Slashdotters start
musing aloud about "Gee, the L.A. water supply is rather simplistic,
isn't it?" An Open Source crime web, a Wiki for real WTO opposition?
Hacking L.A. may be a lot easier than hacking IE.

That is basic banter over a beer in a bar; I don't even want to
speculate what a serious set of brainiacs could conjure up.
And I refuse to give it any more print space here. However, the
danger of such sad memes is what requires our attention!

In fact, I will broaden the spectrum still further: it's not violent
crime and global terrorism I worry about so much as the basic
underpinning of our entire civilization coming apart, as such. No acts
of malevolence, no horrible plans by evil dark forces, neither the
singular "Bond Nemesis" kind, nor masses of religious fanatics. None of
that is needed... It is the glue coming apart that will topple this
tower. And no, I am not referring to "spiraling trillions of debt".

No, what I am referring to is a slow process I have observed over the
last 30 years, ever since in my teens I wondered "How would this world
work, if everyone were like me?" and realized: it wouldn't!
It was amazing to me that there were just enough people to make just
enough shoes so that everyone could avoid walking barefoot. That there
are people volunteering to spend day in, day out being dentists and
lawyers and salesmen. For almost any "jobjob" I look at, I have the most
sincere admiration for the tenacity of the people... how do they do it?
It would drive me nuts after hours, let alone years... Who makes those
shoes?

That was the wondrous introspection in adolescent phases, searching
for a place in the jigsaw puzzle.

But in recent years, the haunting question has come back to me: "How
the hell does this world function at all? And does it, really?" I feel
an alienation zapping through the channels; I can't find myself
connecting with those groups of humanoids trouncing around MTV.
Especially in the glimpses of "real life": on daytime courtroom dramas
or just looking at faces in the street. On every scale, the closer I
observe it, the more the creeping realization haunts me: individuals,
families, groups, neighborhoods, cities, states, countries... they all
just barely hang in there, between debt and dysfunction. The whole
planet looks like Anytown, with mini-malls cutting up the landscape,
and just down the road it's all white trash with rusty car wrecks in
the back yard. A huge Groucho Club I don't want to be a member of.

But it does go further: what is particularly disturbing to see is this
desperate search for Individualism that has rampantly increased in the
last decade or so.

Everyone suddenly needs to be so special, so utterly unique. So unique
that they race off like lemmings to get 'even more individual'
tattoos -- branded cattle, with branded chains in every mall, converging
on a bland sameness worldwide -- while every rap singer with ever more
gold chains in ever longer stretched limos sings the tune: Don't be a
loser! Don't be normal! The desperation with which millions of
youngsters try to be that one-in-a-million professional ball player
may have been just a "sad but silly factoid" for a long time.

But now the tables are turning: the anthill relies on the behaviour
of the ants to function properly. And that implies social behaviour,
role playing, taking on defined tasks and following them through.

What if each ant suddenly wants to be the queen? What if soldiering
and nest building and cleaning chores are just not cool enough any
more?

If AntTV shows them every day nothing but un-Ant behaviour...?

In my youth we were whining about what to do and how to do it, but in
the end, all of my friends did become "normal" humans: orthopedists and
lawyers, social workers, teachers... There were always a few that
lived on the edges of normality, like ending up as television
celebrities, but on the whole: they were perfectly reasonable ants.
1.8 children, 2.7 cars, 3.3 TVs...

Now: I am no longer confident that line will continue. If every
honeymoon is now booked in Bali on a Visa card, and every kid in
Borneo wants to play ball in NYC... can the network of society be
pliable enough to accommodate total upheaval? And what if 2 billion
Chinese and Indians raise a generation of kids staring 6+ hours a day
into all-American values they can never attain... being taunted with
Hollywood movies of heroic acts and pathetic dysfunctionality, coupled
with ever-increasing violence and disdain for ethics or morals?

Seeing scenes of desperate youths in South American slums watching
"Kill Bill" makes me think: this is just oxygen thrown into the
fire... The ants will not play along much longer. The anthill will not
survive if even a small fraction of the system is falling apart.

Couple that inane drive for "Super Individualism" (and the Quest for
Coolness by an ever-increasing group destined to fail miserably) with
the scarily simple realization of how effective even a small set of
desperate people can become, then add the obvious penchant for
religious fanaticism, and you have an ugly picture of the long-term
future.

So many curves grow upwards towards limits, so many statistics show
increases, and there is no way to turn around.

Many in this forum may speculate about infinite life spans, changing
the speed of light, finding ways to decode consciousness, wormholes to
other dimensions and finding grand unified theories.

To make it clear: I applaud that! "It does take all kinds".
Diversity is indeed one of the definitions of the meaning of life.
Edge IS Applied Diversity.
Those are viable and necessary questions for mankind as a whole.
However, I believe we need to clean house, re-evaluate, and redefine
the priorities.

While we look at the horizon here in these pages, it is the very
ground beneath us that may be crumbling. The anthill could really go
to ant hell!  Next year, let's ask for good ideas. Really practical,
serious, good ideas: "What is the most immediate positive global
impact of any kind that can be achieved within one year?" How to
envision Internet3 and Web3 as a real platform for a global
brainstorming with 6+ billion potential participants.

This was not meant to sound like doom and gloom naysaying.  I see
myself as a sincere optimist, but one who believes in realistic
pessimism as a useful tool to initiate change.
_________________________________________________________________

CARLO ROVELLI
Professor of Physics, University of the Mediterranean, Marseille;
Member, Institut Universitaire de France; Author, Quantum Gravity
[rovelli100.jpg]

What the physics of the 20th century says about the world might in
fact be true

There is a major "dangerous" scientific idea in contemporary physics,
with a potential impact comparable to Copernicus or Darwin. It is the
idea that what the physics of the 20th century says about the world
might in fact be true.

Let me explain. Take quantum mechanics. If taken seriously, it changes
our understanding of reality truly dramatically. For instance, if we
take quantum mechanics seriously, we cannot think that objects ever
have a definite position. They have a position only when they
interact with something else. And even in this case, they are in that
position only with respect to that "something else": they are still
without position with respect to the rest of the world. This is a
change in our image of the world far more dramatic than Copernicus's.
It is also a change in how we can think about ourselves far
more far-reaching than Darwin's. Still, few people take the quantum
revolution really seriously. The danger is exorcized by saying "well,
quantum mechanics is only relevant for atoms and very small
objects...", or by other similar strategies aimed at not taking the
theory seriously. We still haven't digested the fact that the world is
quantum mechanical, or the immense conceptual revolution needed to
make sense of this basic factual discovery about nature.

Another example: take Einstein's relativity theory. Relativity makes
completely clear that asking "what happens right now on Andromeda?" is
complete nonsense. There is no "right now" elsewhere in the universe.
Nevertheless, we keep thinking of the universe as if there were an
immense external clock that ticked away the instants, and we have a
lot of difficulty adapting to the idea that "the present state of
the universe right now" is physical nonsense.

In these cases, what we do is use concepts that we have developed
in our very special environment (characterized by low velocities, low
energy...) and think of the world as if it were all like that. We are
like ants that have grown up in a little garden with green grass and
small stones, and cannot conceive of reality as anything other than
green grass and small stones.

I think that seen from 200 years in the future, the dangerous
scientific idea that was around at the beginning of the 20th century,
and that everybody was afraid to accept, will simply be that the world
is completely different from our simple minded picture of it. As the
physics of the 20th century had already shown.

What makes me smile is that even many of today's "audacious scientific
speculations" about things like extra dimensions, multi-universes, and
the like, are not only completely unsupported experimentally, but
are invariably formulated within a world view that, at a closer look,
has not yet digested quantum mechanics and relativity!
_________________________________________________________________

RICHARD DAWKINS
Evolutionary Biologist, Charles Simonyi Professor For The
Understanding Of Science, Oxford University; Author, The Ancestor's
Tale
[dawkins100.jpg]

Let's all stop beating Basil's car

Ask people why they support the death penalty or prolonged
incarceration for serious crimes, and the reasons they give will
usually involve retribution. There may be passing mention of
deterrence or rehabilitation, but the surrounding rhetoric gives the
game away. People want to kill a criminal as payback for the horrible
things he did. Or they want to give "satisfaction" to the victims of
the crime or their relatives. An especially warped and disgusting
application of the flawed concept of retribution is Christian
crucifixion as "atonement" for "sin".

Retribution as a moral principle is incompatible with a scientific
view of human behaviour. As scientists, we believe that human brains,
though they may not work in the same way as man-made computers, are
just as surely governed by the laws of physics. When a computer
malfunctions,
we do not punish it. We track down the problem and fix it, usually by
replacing a damaged component, either in hardware or software.

Basil Fawlty, British television's hotelier from hell created by the
immortal John Cleese, was at the end of his tether when his car broke
down and wouldn't start. He gave it fair warning, counted to three,
gave it one more chance, and then acted. "Right! I warned you. You've
had this coming to you!" He got out of the car, seized a tree branch
and set about thrashing the car within an inch of its life. Of course
we laugh at his irrationality. Instead of beating the car, we would
investigate the problem. Is the carburettor flooded? Are the sparking
plugs or distributor points damp? Has it simply run out of gas? Why do
we not react in the same way to a defective man: a murderer, say, or a
rapist? Why don't we laugh at a judge who punishes a criminal, just as
heartily as we laugh at Basil Fawlty? Or at King Xerxes who, in 480
BC, sentenced the rough sea to 300 lashes for wrecking his bridge of
ships? Isn't the murderer or the rapist just a machine with a
defective component? Or a defective upbringing? Defective education?
Defective genes?

Concepts like blame and responsibility are bandied about freely where
human wrongdoers are concerned. When a child robs an old lady, should
we blame the child himself or his parents? Or his school? Negligent
social workers? In a court of law, feeble-mindedness is an accepted
defence, as is insanity. Diminished responsibility is argued by the
defence lawyer, who may also try to absolve his client of blame by
pointing to his unhappy childhood, abuse by his father, or even
unpropitious genes (not, so far as I am aware, unpropitious planetary
conjunctions, though it wouldn't surprise me).

But doesn't a truly scientific, mechanistic view of the nervous system
make nonsense of the very idea of responsibility, whether diminished
or not? Any crime, however heinous, is in principle to be blamed on
antecedent conditions acting through the accused's physiology,
heredity and environment. Don't judicial hearings to decide questions
of blame or diminished responsibility make as little sense for a
faulty man as for a Fawlty car?

Why is it that we humans find it almost impossible to accept such
conclusions? Why do we vent such visceral hatred on child murderers,
or on thuggish vandals, when we should simply regard them as faulty
units that need fixing or replacing? Presumably because mental
constructs like blame and responsibility, indeed evil and good, are
built into our brains by millennia of Darwinian evolution. Assigning
blame and responsibility is an aspect of the useful fiction of
intentional agents that we construct in our brains as a means of
short-cutting a truer analysis of what is going on in the world in
which we have to live. My dangerous idea is that we shall eventually
grow out of all this and even learn to laugh at it, just as we laugh
at Basil Fawlty when he beats his car. But I fear it is unlikely that
I shall ever reach that level of enlightenment.
_________________________________________________________________

SETH LLOYD
Quantum Mechanical Engineer, MIT
[lloyd100.jpg]

The genetic breakthrough that made people capable of ideas themselves

The most dangerous idea is the genetic breakthrough that made people
capable of ideas themselves. The idea of ideas is nice enough in
principle; and ideas certainly have had their impact for good. But one
of these days one of those nice ideas is likely to have the unintended
consequence of destroying everything we know.

Meanwhile, we cannot stop creating and exploring new ideas: the
genie of ingenuity is out of the bottle. To suppress the power of
ideas will hasten catastrophe, not avert it. Rather, we must wield
that power with the respect it deserves.

Who risks no danger reaps no reward.
_________________________________________________________________

CAROLYN PORCO
Planetary Scientist; Cassini Imaging Science Team Leader; Director
CICLOPS, Boulder CO; Adjunct Professor, University of Colorado,
University of Arizona
[porco100.jpg]

The Greatest Story Ever Told

The confrontation between science and formal religion will come to an
end when the role played by science in the lives of all people is the
same as that played by religion today.

And just what is that?

At the heart of every scientific inquiry is a deep spiritual quest --
to grasp, to know, to feel connected through an understanding of the
secrets of the natural world, to have a sense of one's part in the
greater whole. It is this inchoate desire for connection to something
greater and immortal, the need for elucidation of the meaning of the
'self', that motivates the religious to belief in a higher
'intelligence'. It is the allure of a bigger agency -- outside the
self but also involving, protecting, and celebrating the purpose of
the self -- that is the great attractor. Every culture has religion.
It undoubtedly satisfies a manifest human need.

But the same spiritual fulfillment and connection can be found in the
revelations of science. From energy to matter, from fundamental
particles to DNA, from microbes to Homo sapiens, from the singularity
of the Big Bang to the immensity of the universe .... ours is the
greatest story ever told. We scientists have the drama, the plot, the
icons, the spectacles, the 'miracles', the magnificence, and even the
special effects. We inspire awe. We evoke wonder.
And we don't have one god, we have many of them. We find gods in the
nucleus of every atom, in the structure of space/time, in the
counter-intuitive mechanisms of electromagnetism. What richness! What
consummate beauty!

We even exalt the 'self'. Our script requires a broadening of the
usual definition, but we too offer hope for everlasting existence. The
'self' that is the particular, networked set of connections of the
matter comprising our mortal bodies will one day die, of course. But
the 'self' that is the sum of each separate individual condensate in
us of energy-turned-matter is already ancient and will live forever.
Each fundamental particle may one day return to energy, or from there
revert back to matter. But in one form or another, it will not cease.
In this sense, we and all around us are eternal, immortal, and
profoundly connected. We don't have one soul; we have trillions upon
trillions of them.
These are reasons enough for jubilation ... for riotous, unrestrained,
exuberant merry-making.

So what are we missing?

Ceremony.

We lack ceremony. We lack ritual. We lack the initiation of baptism,
the brotherhood of communal worship.

We have no loving ministers, guiding and teaching the flocks in the
ways of the 'gods'. We have no fervent missionaries, no loyal
apostles. And we lack the all-inclusive ecumenical embrace, the
extended invitation to the unwashed masses. Alienation does not warm
the heart; communion does.

But what if? What if we appropriated the craft, the artistry, the
methods of formal religion to get the message across? Imagine
'Einstein's Witnesses' going door to door or TV evangelists
passionately espousing the beauty of evolution.

Imagine a Church of Latter Day Scientists where believers could
gather. Imagine congregations raising their voices in tribute to
gravity, the force that binds us all to the Earth, and the Earth to
the Sun, and the Sun to the Milky Way. Or others rejoicing in the
nuclear force that makes possible the sunlight of our star and the
starlight of distant suns. And can't you just hear the hymns sung to
the antiquity of the universe, its abiding laws, and the heaven above
that 'we' will all one day inhabit, together, commingled, spread out
like a nebula against a diamond sky?

One day, the sites we hold most sacred just might be the astronomical
observatories, the particle accelerators, the university research
installations, and other laboratories where the high priests of
science -- the biologists, the physicists, the astronomers, the
chemists -- engage in the noble pursuit of uncovering the workings of
nature herself. And today's museums, expositional halls, and
planetaria may then become tomorrow's houses of worship, where these
revealed truths, and the wonder of our interconnectedness with the
cosmos, are glorified in song by the devout and the soulful.

"Hallelujah!", they will sing. "May the force be with you!"
_________________________________________________________________

MICHAEL NESMITH
Artist, writer; Former cast member of "The Monkees"; A Trustee and
President of the Gihon Foundation and a Trustee and Vice-Chair of the
American Film Institute
[nez100.jpg]

Existence is Non-Time, Non-Sequential, and Non-Objective

Not a dangerous idea per se, but like a razor-sharp tool in unskilled
hands it can inflict unintended damage.

Non-Time drives forward the notion that the past does not create the
present. This would of course render evolutionary theory a
local-system, near-field process that was non-causative (i.e., an effect).

Non-Sequential reverberates through the Turing machine and
computation, and points to simultaneity. It redefines language and
cognition.

Non-Objective establishes a continuum not to be confused with
solipsism. As Schrödinger puts it when discussing the "time-hallowed
discrimination between subject and object" -- "the world is given to
me only once, not one existing and one perceived. Subject and object
are only one. The barrier between them cannot be said to have broken
down as a result of recent experience in the physical sciences, for
this barrier does not exist". This continuum has large implications
for the empirical data set, as it introduces factual infinity into the
data plane.

These three notions, Non-Time, Non-sequence, and Non-Object have been
peeking like diamonds through the dust of empiricism, philosophy, and
the sciences for centuries. Quantum mechanics, including Deutsch's
parallel universes and the massive parallelism of quantum computing,
is our brightest star -- an unimaginably tall peak on our fitness
landscape.

They bring us to a threshold over which empiricism has yet to travel,
through which philosophy must reconstruct the very idea of ideas, and
beyond which stretches the now familiar "uncharted territories" of all
great adventures.
_________________________________________________________________

LAWRENCE KRAUSS
Physicist/Cosmologist, Case Western Reserve University; Author, Hiding
in the Mirror
[krauss100.jpg]

The world may fundamentally be inexplicable

Science has progressed for 400 years by ultimately explaining observed
phenomena in terms of fundamental theories that are rigid. Even minor
deviations from predicted behavior are not allowed by the theory, so
that if such deviations are observed, these provide evidence that the
theory must be modified, usually being replaced by a yet more
comprehensive theory that fixes a wider range of phenomena.

The ultimate goal of physics, as it is often described, is to have a
"theory of everything", in which all the fundamental laws that
describe nature can neatly be written down on the front of a T-shirt
(even if the T-shirt can only exist in 10 dimensions!). However, with
the recognition that the dominant energy in the universe resides in
empty space -- something that is so peculiar that it appears very
difficult to understand within the context of any theoretical ideas
we now possess -- more physicists have been exploring the idea that
perhaps physics is an 'environmental science', that the laws of
physics we observe are merely accidents of our circumstances, and
that an infinite number of different universes could exist with
different laws of physics.

This is true even if there does exist some fundamental candidate
mathematical physical theory. For example, as is currently in vogue in
ideas related to string theory, perhaps the fundamental theory
allows an infinite number of different 'ground state' solutions, each
of which describes a different possible universe with a consistent set
of physical laws and physical dimensions.

It might be that the only way to understand why the laws of nature we
observe in our universe are the way they are is to understand that if
they were any different, then life could not have arisen in our
universe, and we would thus not be here to measure them today.

This is one version of the infamous "anthropic principle". But it
could actually be worse -- it is equally likely that many different
combinations of laws would allow life to form, and that it is a pure
accident that the constants of nature result in the combinations we
experience in our universe. Or, it could be that the mathematical
formalism is actually so complex that the ground states of the
theory, i.e. the set of possible states that might describe our
universe, might not actually be determinable.

In this case, the end of "fundamental" theoretical physics (i.e. the
search for fundamental microphysical laws...there will still be lots
of work for physicists who try to understand the host of complex
phenomena occurring at a variety of larger scales) might occur not via
a theory of everything, but rather with the recognition that all
so-called fundamental theories that might describe nature would be
purely "phenomenological", that is, they would be derivable from
observational phenomena, but would not reflect any underlying grand
mathematical structure of the universe that would allow a basic
understanding of why the universe is the way it is.
_________________________________________________________________

DANIEL C. DENNETT
Philosopher; University Professor, Co-Director, Center for Cognitive
Studies, Tufts University; Author, Darwin's Dangerous Idea
[dennett101.jpg]

There aren't enough minds to house the population explosion of memes

Ideas can be dangerous. Darwin had one, for instance. We hold all
sorts of inventors and other innovators responsible for assaying, in
advance, the environmental impact of their creations, and since ideas
can have huge environmental impacts, I see no reason to exempt us
thinkers from the responsibility of quarantining any deadly ideas we
may happen to come across. So if I found what I took to be such a
dangerous idea, I would button my lip until I could find some way of
preparing the ground for its safe expression. I expect that others who
are replying to this year's Edge question have engaged in similar
reflections and arrived at the same policy. If so, then some people
may be pulling their punches with their replies. The really dangerous
ideas they are keeping to themselves.

But here is an unsettling idea that is bound to be true in one version
or another, and so far as I can see, it won't hurt to publicize it
more. It might well help.

The human population is still growing, but at nowhere near the rate
that the population of memes is growing. There is competition for the
limited space in human brains for memes, and something has to give.
Thanks to our incessant and often technically brilliant efforts, and
our apparently insatiable appetites for novelty, we have created an
explosively growing flood of information, in all media, on all topics,
in every genre. Now either (1) we will drown in this flood of
information, or (2) we won't drown in it. Both alternatives are deeply
disturbing. What do I mean by drowning? I mean that we will become
psychologically overwhelmed, unable to cope, victimized by the glut
and unable to make life-enhancing decisions in the face of an
unimaginable surfeit. (I recall the brilliant scene in the film of
Evelyn Waugh's dark comedy The Loved One in which embalmer Mr.
Joyboy's gluttonous mother is found sprawled on the kitchen floor,
helplessly wallowing in the bounty that has spilled from a capsized
refrigerator.) We will be lost in the maze, preyed upon by whatever
clever forces find ways of pumping money -- or simply further memetic
replications -- out of our situation. (In The War of the Worlds, H. G.
Wells sees that it might well be our germs, not our high-tech military
contraptions, that subdue our alien invaders. Similarly, might our own
minds succumb not to the devious manipulations of evil brainwashers
and propagandists, but to nothing more than a swarm of irresistible
ditties, nibbled to death by slogans and one-liners?)

If we don't drown, how will we cope?  If we somehow learn to swim in
the rising tide of the infosphere, that will entail that we -- that is
to say, our grandchildren and their grandchildren -- become very, very
different from our recent ancestors. What will "we" be like?  (Some
years ago, Doug Hofstadter wrote a wonderful piece, "In 2093, Just
Who Will Be We?" in which he imagines robots being created to have
"human" values, robots that gradually take over the social roles of
our biological descendants, who become stupider and less concerned
with the things we value. If we could secure the welfare of just one
of these groups, our children or our brainchildren, which group would
we care about the most, with which group would we identify?)
Whether "we" are mammals or robots in the not so distant future, what
will we know and what will we have forgotten forever, as our
previously shared intentional objects recede in the churning wake of
the great ship that floats on this sea and charges into the future
propelled by jets of newly packaged information? What will happen to
our cultural landmarks?  Presumably our descendants will all still
recognize a few reference points (the pyramids of Egypt, arithmetic,
the Bible, Paris, Shakespeare, Einstein, Bach . . . ) but as wave
after wave of novelty passes over them, what will they lose sight of?
The Beatles are truly wonderful, but if their cultural immortality is
to be purchased by the loss of such minor 20th century figures as
Billie Holiday, Igor Stravinsky, and Georges Brassens [who he?], what
will remain of our shared understanding?

The intergenerational mismatches that we all experience in macroscopic
versions (great-grandpa's joke falls on deaf ears, because nobody else
in the room knows that Nixon's wife was named "Pat") will presumably
be multiplied to the point where much of the raw information that we
have piled in our digital storehouses is simply incomprehensible to
everyone -- except that we will have created phalanxes of "smart"
Rosetta-stones of one sort or another that can "translate" the alien
material into something we (think maybe we) understand. I suspect we
hugely underestimate the importance (to our sense of cognitive
security) of our regular participation in the four-dimensional human
fabric of mutual understanding, with its reassuring moments of
shared -- and seen to be shared, and seen to be seen to be
shared -- comprehension.

What will happen to common knowledge in the future?  I do think our
ancestors had it easy: aside from all the juicy bits of unshared
gossip and some proprietary trade secrets and the like, people all
knew pretty much the same things, and knew that they knew the same
things. There just wasn't that much to know.  Won't people be able to
create and exploit illusions of common knowledge in the future,
virtual worlds in which people only think they are in touch with their
cyber-neighbors?

I see small-scale projects that might protect us to some degree, if
they are done wisely. Think of all the work published in academic
journals before, say, 1990 that is in danger of becoming practically
invisible to later researchers because it can't be found on-line with
a good search engine. Just scanning it all and hence making it
"available" is not the solution. There is too much of it. But we could
start projects in which (virtual) communities of retired researchers
who still have their wits about them and who know particular
literatures well could brainstorm amongst themselves, using their
pooled experience to elevate the forgotten gems, rendering them
accessible to the next generation of researchers. This sort of
activity has in the past been seen to be a stodgy sort of
scholarship, fine for classicists and historians, but not fit work for
cutting-edge scientists and the like. I think we should try to shift
this imagery and help people recognize the importance of providing for
each other this sort of pathfinding through the forests of
information. It's a drop in the bucket, but perhaps if we all start
thinking about conservation of valuable mind-space, we can save
ourselves (our descendants) from informational collapse.
_________________________________________________________________

DANIEL GILBERT
Psychologist, Harvard University
[gilbert100.jpg]

The idea that ideas can be dangerous

Dangerous does not mean exciting or bold. It means likely to cause
great harm. The most dangerous idea is the only dangerous idea: The
idea that ideas can be dangerous.

We live in a world in which people are beheaded, imprisoned, demoted,
and censured simply because they have opened their mouths, flapped
their lips, and vibrated some air. Yes, those vibrations can make us
feel sad or stupid or alienated. Tough shit. That's the price of
admission to the marketplace of ideas. Hateful, blasphemous,
prejudiced, vulgar, rude, or ignorant remarks are the music of a free
society, and the relentless patter of idiots is how we know we're in
one. When all the words in our public conversation are fair, good, and
true, it's time to make a run for the fence.
_________________________________________________________________

ANDY CLARK
School of Philosophy, Psychology and Language Sciences, Edinburgh
University
[clark100.jpg]

The quick-thinking zombies inside us

So much of what we do, feel, think and choose is determined by
non-conscious, automatic uptake of cues and information.

Of course, advertisers will say they have known this all along. But
only in recent years, with seminal studies by Tanya Chartrand, John
Bargh and others has the true scale of our daily automatism really
begun to emerge. Such studies show that it is possible (it is
relatively easy) to activate racist stereotypes that impact our
subsequent behavioral interactions, for example yielding the judgment
that your partner in a subsequent game or task is more hostile than
would be judged by an unprimed control. Such effects occur despite a
subject's total and honest disavowal of those very stereotypes. In
similar ways it is possible to unconsciously prime us to feel older
(and then we walk more slowly).

In my favorite recent study, experimenters manipulate cues so that
subjects form an unconscious goal, whose (unnoticed) frustration makes
them lose confidence and perform worse at a subsequent task! The
dangerous truth, it seems to me, is that these are not isolated little
laboratory events. Instead, they reveal the massed woven fabric of our
day-to-day existence. The underlying mechanisms at work impart an
automatic drive towards the automation of all manner of choices and
actions, and don't discriminate between the 'trivial' and the
portentous.

It now seems clear that many of my major life and work decisions are
made very rapidly, often on the basis of ecologically sound but
superficial cues, with slow deliberative reason busily engaged in
justifying what the quick-thinking zombies inside me have already laid
on the table. The good news is that without these mechanisms we'd be
unable to engage in fluid daily life or reason at all, and that very
often they are right. The dangerous truth, though, is that we are
indeed designed to cut conscious, aware choice out of the picture
wherever possible. This is not an issue about free will, but simply
about the extent to which conscious deliberation cranks the engine of
behavior. Crank it it does: but not in anything like the way, or
extent, we may have thought. We'd better get to grips with this before
someone else does.
_________________________________________________________________

SHERRY TURKLE
Psychologist, MIT; Author, Life on the Screen: Identity in the Age of
the Internet
[turkle100.jpg]

After several generations of living in the computer culture,
simulation will become fully naturalized. Authenticity in the
traditional sense loses its value, a vestige of another time.

Consider this moment from 2005: I take my fourteen-year-old daughter
to the Darwin exhibit at the American Museum of Natural History. The
exhibit documents Darwin's life and thought, and with a somewhat
defensive tone (in light of current challenges to evolution by
proponents of intelligent design), presents the theory of evolution as
the central truth that underpins contemporary biology. The Darwin
exhibit wants to convince and it wants to please. At the entrance to
the exhibit is a turtle from the Galapagos Islands, a seminal object
in the development of evolutionary theory. The turtle rests in its
cage, utterly still. "They could have used a robot," comments my
daughter. It was a shame to bring the turtle all this way and put it
in a cage for a performance that draws so little on the turtle's
"aliveness. " I am startled by her comments, both solicitous of the
imprisoned turtle because it is alive and unconcerned by its
authenticity. The museum has been advertising these turtles as
wonders, curiosities, marvels -- among the plastic models of life at
the museum, here is the life that Darwin saw. I begin to talk with
others at the exhibit, parents and children. It is Thanksgiving
weekend. The line is long, the crowd frozen in place. My question, "Do
you care that the turtle is alive?" is a welcome diversion. A
ten-year-old girl would prefer a robot turtle because aliveness comes
with aesthetic inconvenience: "Its water looks dirty. Gross." More
usually, the votes for the robots echo my daughter's sentiment that in
this setting, aliveness doesn't seem worth the trouble. A
twelve-year-old girl opines: "For what the turtles do, you didn't have
to have the live ones." Her father looks at her, uncomprehending:
"But the point is that they are real, that's the whole point."

The Darwin exhibit is about authenticity: on display are the actual
magnifying glass that Darwin used, the actual notebooks in which he
recorded his observations, indeed, the very notebook in which he wrote
the famous sentences that first described his theory of evolution But
in the children's reactions to the inert but alive Galapagos turtle,
the idea of the "original" is in crisis.

I have long believed that in the culture of simulation, the notion of
authenticity is for us what sex was to the Victorians -- "threat and
obsession, taboo and fascination." I have lived with this idea for
many years, yet at the museum, I find the children's position
startling, strangely unsettling. For these children, in this context,
aliveness seems to have no intrinsic value. Rather, it is useful only
if needed for a specific purpose. "If you put in a robot instead of
the live turtle, do you think people should be told that the turtle is
not alive?" I ask. Not really, say several of the children. Data on
"aliveness" can be shared on a "need to know" basis, for a purpose.
But what are the purposes of living things? When do we need to know if
something is alive?

Consider another vignette from 2005: an elderly woman in a nursing
home outside of Boston is sad. Her son has broken off his relationship
with her. Her nursing home is part of a study I am conducting on
robotics for the elderly. I am recording her reactions as she sits
with the robot Paro, a seal-like creature, advertised as the first
"therapeutic robot" for its ostensibly positive effects on the ill,
the elderly, and the emotionally troubled. Paro is able to make eye
contact through sensing the direction of a human voice, is sensitive
to touch, and has "states of mind" that are affected by how it is
treated, for example, is it stroked gently or with agressivity? In
this session with Paro, the woman, depressed because of her son's
abandonment, comes to believe that the robot is depressed as well. She
turns to Paro, strokes him and says: "Yes, you're sad, aren't you.
It's tough out there. Yes, it's hard. " And then she pets the robot
once again, attempting to provide it with comfort. And in so doing,
she tries to comfort herself.

The woman's sense of being understood is based on the ability of
computational objects like Paro to convince their users that they are
in a relationship. I call these creatures (some virtual, some physical
robots) "relational artifacts. " Their ability to inspire relationship
is not based on their intelligence or consciousness, but on their
ability to push certain "Darwinian" buttons in people (making eye
contact, for example) that make people respond as though they were in
relationship. For me, relational artifacts are the new uncanny in our
computer culture -- as Freud once put it, the long familiar taking a
form that is strangely unfamiliar. As such, they confront us with new
questions.

What does this deployment of "nurturing technology" at the two most
dependent moments of the life cycle say about us? What will it do to
us? Do plans to provide relational robots to attend to children and
the elderly make us less likely to look for other solutions for their
care? People come to feel love for their robots, but if our experience
with relational artifacts is based on a fundamentally deceitful
interchange, can it be good for us? Or might it be good for us in the
"feel good" sense, but bad for us in our lives as moral beings?

Relationships with robots bring us back to Darwin and his dangerous
idea: the challenge to human uniqueness. When we see children and the
elderly exchanging tendernesses with robotic pets the most important
question is not whether children will love their robotic pets more
than their real life pets or even their parents, but rather, what will
loving come to mean?
_________________________________________________________________

STEVEN STROGATZ
Applied mathematician, Cornell University; Author, Sync
[strogatz100.jpg]

The End of Insight

I worry that insight is becoming impossible, at least at the frontiers
of mathematics. Even when we're able to figure out what's true or
false, we're less and less able to understand why.

An argument along these lines was recently given by Brian Davies in
the "Notices of the American Mathematical Society". He mentions, for
example, that the four-color map theorem in topology was proven in
1976 with the help of computers, which exhaustively checked a huge but
finite number of possibilities. No human mathematician could ever
verify all the intermediate steps in this brutal proof, and even if
someone claimed to, should we trust them? To this day, no one has come
up with a more elegant, insightful proof. So we're left in the
unsettling position of knowing that the four-color theorem is true but
still not knowing why.

Similarly important but unsatisfying proofs have appeared in group
theory (in the classification of finite simple groups, roughly akin to
the periodic table for chemical elements) and in geometry (in the
problem of how to pack spheres so that they fill space most
efficiently, a puzzle that goes back to Kepler in the 1600s and that
arises today in coding theory for telecommunications).

In my own field of complex systems theory, Stephen Wolfram has
emphasized that there are simple computer programs, known as cellular
automata, whose dynamics can be so inscrutable that there's no way to
predict how they'll behave; the best you can do is simulate them on
the computer, sit back, and watch how they unfold. Observation
replaces insight. Mathematics becomes a spectator sport.

If this is happening in mathematics, the supposed pinnacle of human
reasoning, it seems likely to afflict us in science too, first in
physics and later in biology and the social sciences (where we're not
even sure what's true, let alone why).

When the End of Insight comes, the nature of explanation in science
will change forever. We'll be stuck in an age of authoritarianism,
except it'll no longer be coming from politics or religious dogma, but
from science itself.
_________________________________________________________________

TERRENCE SEJNOWSKI
Computational Neuroscientist, Howard Hughes Medical Institute;
Coauthor, The Computational Brain
[sejnowski101.jpg]

When will the Internet become aware of itself?

I never thought that I would become omniscient during my lifetime, but
as Google continues to improve and online information continues to
expand I have achieved omniscience for all practical purposes. The
Internet has created a global marketplace for ideas and products,
making it possible for individuals in the far corners of the world to
automatically connect directly to each other. The Internet has
achieved these capabilities by growing exponentially in total
communications bandwidth. How does the communications power of the
Internet compare with that of the cerebral cortex, the most
interconnected part of our brains?

Cortical connections are expensive because they take up volume and
cost energy to send information in the form of spikes along axons.
About 44% of the cortical volume in humans is taken up with long-range
connections, called the white matter. Interestingly, the thickness of
gray matter, just a few millimeters, is nearly constant in mammals
that range in brain volume over five orders of magnitude, and the
volume of the white matter scales approximately as the 4/3 power of
the volume of the gray matter. The larger the brain, the larger the
fraction of resources devoted to communications compared to
computation.
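
To make that scaling claim concrete, here is a minimal numerical
sketch in Python. The 4/3 exponent is the one quoted above; the
proportionality constant and the sample gray-matter volumes are purely
illustrative assumptions chosen to show the trend, not measured values.

    # Sketch: if V_white = k * V_gray**(4/3), the fraction of total
    # volume devoted to long-range wiring (white matter) rises with
    # brain size. k and the sample volumes are illustrative assumptions.
    def white_fraction(v_gray, k=0.01):
        v_white = k * v_gray ** (4.0 / 3.0)
        return v_white / (v_gray + v_white)

    for v_gray in (1e2, 1e3, 1e4, 1e5):  # arbitrary volume units
        print(f"gray volume {v_gray:>8.0f} -> "
              f"white fraction {white_fraction(v_gray):.2f}")

Under these assumptions the white-matter fraction climbs from a few
percent to roughly a third as gray-matter volume grows by three orders
of magnitude, which is the trend the paragraph above describes.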

However, the global connectivity in the cerebral cortex is extremely
sparse: The probability of any two cortical neurons having a direct
connection is around one in a hundred for neurons in a vertical column
1 mm in diameter, but only one in a million for more distant neurons.
Thus, only a small fraction of the computation that occurs locally
can be reported to other areas, through a small fraction of the cells
that connect distant cortical areas.

Despite the sparseness of cortical connectivity, the potential
bandwidth of all of the neurons in the human cortex is approximately a
terabit per second, comparable to the total world backbone capacity of
the Internet. However, this capacity is never achieved by the brain in
practice because only a fraction of cortical neurons have a high rate
of firing at any given time. Recent work by Simon Laughlin suggests
that another physical constraint -- energy -- limits the brain's ability
to harness its potential bandwidth.

The cerebral cortex also has a massive amount of memory. There are
approximately one billion synapses between neurons under every square
millimeter of cortex, or about one hundred million million synapses
overall. Assuming around a byte of storage capacity at each synapse
(including dynamic as well as static properties), this comes to a
total of 10^15 bits of storage. This is comparable to the amount of
data on the entire Internet; Google can store this in terabyte disk
arrays and has hundreds of thousands of computers simultaneously
sifting through it.
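
As a back-of-the-envelope check of those storage numbers (a sketch
only: the synapse density and the byte-per-synapse figure are taken
from the paragraph above, while the total cortical sheet area of 1e5
square millimeters is an assumed round figure used for illustration):

    # Rough check of the storage estimate quoted above.
    synapses_per_mm2 = 1e9     # "one billion synapses" per mm^2 (from the text)
    cortical_area_mm2 = 1e5    # assumed round figure for the cortical sheet
    bytes_per_synapse = 1      # "around a byte of storage" per synapse (from the text)

    total_synapses = synapses_per_mm2 * cortical_area_mm2  # ~1e14
    total_bits = total_synapses * bytes_per_synapse * 8    # ~1e15 bits

    print(f"total synapses ~ {total_synapses:.0e}")
    print(f"total storage  ~ {total_bits:.0e} bits")

This lands at roughly 10^14 synapses ("one hundred million million")
and on the order of 10^15 bits, consistent with the totals quoted above.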

Thus, the Internet and our ability to search it are within reach of
the limits of the raw storage and communications capacity of the human
brain, and should exceed it by 2015.

Leo van Hemmen and I recently asked 23 neuroscientists to think about
what we don't yet know about the brain, and to propose a question so
fundamental and so difficult that it could take a century to solve,
following in the tradition of Hilbert's 23 problems in mathematics.
Christof Koch and Francis Crick speculated that the key to
understanding consciousness was global communication:  How do neurons
in the diverse parts of the brain manage to coordinate despite the
limited connectivity?  Sometimes, the communication gets crossed, and
V. S. Ramachandran and Edward Hubbard asked whether synesthetes, rare
individuals who experience crossover in sensory perception such as
hearing colors, seeing sounds, and tasting tactile sensations, might
give us clues to how the brain evolved.

There is growing evidence that the flow of information between parts
of the cortex is regulated by the degree of synchrony of the spikes
within populations of cells that represent perceptual states. Robert
Desimone and his colleagues have examined the effects of attention on
cortical neurons in awake, behaving monkeys and found the coherence
between the spikes of single neurons in the visual cortex and local
field potentials in the gamma band, 30-80 Hz, increased when the
covert attention of a monkey was directed toward a stimulus in the
receptive field of the neuron. The coherence also selectively
increased when a monkey searched for a target with a cued color or
shape amidst a large number of distracters. The increase in coherence
means that neurons representing the stimuli with the cued feature
would have greater impact on target neurons, making them more salient.

The link between attention and spike-field coherence raises a number
of interesting questions. How does top-down input from the prefrontal
cortex regulate the coherence of neurons in other parts of the cortex
through feedback connections? How is the rapidity of the shifts in
coherence achieved?  Experiments on neurons in cortical slices suggest
that inhibitory interneurons are connected to each other in networks
and are responsible for gamma oscillations. Researchers in my
laboratory have used computational models to show that excitatory
inputs can rapidly synchronize a subset of the inhibitory neurons that
are in competition with other inhibitory networks.  Inhibitory
neurons, long thought to merely block activity, are highly effective
in synchronizing neurons in a local column already firing in response
to a stimulus.

The oscillatory activity that is thought to synchronize neurons in
different parts of the cortex occurs in brief bursts, typically
lasting for only a few hundred milliseconds. Thus, it is possible that
there is a packet structure for long-distance communication in the
cortex, similar to the packets that are used to communicate on the
Internet, though with quite different protocols. The first electrical
signals recorded from the brain in 1875 by Richard Caton were
oscillatory signals that changed in amplitude and frequency with the
state of alertness. The function of these oscillations remains a
mystery, but it would be remarkable if it were to be discovered that
these signals held the secrets to the brain's global communications
network.

Since its inception in 1969, the Internet has been scaled up to a size
not even imagined by its inventors, in contrast to most engineered
systems, which fall apart when they are pushed beyond their design
limits. In part, the Internet achieves this scalability because it has
the ability to regulate itself, deciding on the best routes to send
packets depending on traffic conditions. Like the brain, the Internet
has circadian rhythms that follow the sun as the planet rotates under
it. The growth of the Internet over the last several decades more
closely resembles biological evolution than engineering.

How would we know if the Internet were to become aware of itself?  The
problem is that we don't even know if some of our fellow creatures on
this planet are self-aware. For all we know, the Internet is already
aware of itself.
_________________________________________________________________

LYNN MARGULIS
Biologist, University of Massachusetts, Amherst; Coauthor (with Dorion
Sagan), Acquiring Genomes: A Theory of the Origins of Species
[margulis100.jpg]
Bacteria are us

What is my dangerous idea? Although arcane, evidence for this
dangerous concept is overwhelming; I have collected clues from many
sources. Reminiscent of Oscar Wilde's claim that "even true things can
be proved," I predict that the scientific gatekeepers in academia
eventually will be forced to permit this dangerous idea to become
widely accepted. What is it?

Our sensibilities, our perceptions that register through our sense
organ cells evolved directly from our bacterial ancestors. Signals in
the environment: light impinging on the eye's retina, taste on the
buds of the tongue, odor through the nose, sound in the ear are
translated to nervous impulses by extensions of sensory cells called
cilia. We, like all other mammals, including our apish brothers, have
taste-bud cilia, inner ear cilia, nasal passage cilia that detect
odors. We distinguish savory from sweet, birdsong from whalesong,
drumbeats from thunder. With our eyes closed, we detect the light of
the rising sun and feel the vibrations of the drums. These
abilities to sense our surroundings, a heritage that preceded the
evolution of all primates, indeed, all animals, by use of specialized
cilia at the tips of sensory cells, and the existence of the cilia in
the tails of sperm, come from one kind of our bacterial ancestors.
Which? Those of our bacterial ancestors that became cilia. We owe our
sensitivity to a loving touch, the scent of lavender, the taste of a
salted nut or vinaigrette, a police-cruiser siren, or glimpse of
brilliant starlight to our sensory cells. We owe the chemical
attraction of the sperm as its tail impels it to swim toward the egg,
even the moss plant sperm, to its cilia. The dangerous idea is that
the cilia evolved from hyperactive bacteria. Bacterial ancestors swam
toward food and away from noxious gases; they moved up to the well-lit
waters at the surface of the pond. They were startled when, in a
crowd, some relative bumped them. These bacterial ancestors, which
never slept, avoided water that was too hot or too salty. They still do.

Why is the concept that our sensitivities evolved directly from
swimming bacterial ancestors of the sensory cilia so dangerous?

Several reasons: we would be forced to admit that bacteria are
conscious, that they are sensitive to stimuli in their environment and
behave accordingly. We would have to accept that bacteria, reviled as
our enemies, are not merely neutral or friendly but that they are
us. They are direct ancestors of our most sensitive body parts. Our
culture's terminology about bacteria is that of warfare: they are
germs to be destroyed and forever vanquished, bacterial enemies make
toxins that poison us. We load our soaps with antibacterials that kill
on contact; stomach ulcers are now agreed to be caused by bacterial
infection. Even if some admit the existence of "good" bacteria in soil
or in probiotic food like yogurt, few of us tolerate the dangerous
notion that human sperm tails, and the sensitive cells of nasal
passages lined with waving cilia, are former bacteria. If this
dangerous idea becomes widespread, it follows that we humans must agree
that even before our evolution as animals we hated and tried to kill
our own ancestors. Again, we have seen the enemy, indeed, and, as
usual, it is us. Social interactions of sensitive bacteria, then, not
God, made us who we are today.
_________________________________________________________________

THOMAS METZINGER
Frankfurt Institute for Advanced Studies; Johannes
Gutenberg-Universität Mainz; President German Cognitive Science
Society; Author: Being No One
[metzinger100.jpg]

The Forbidden Fruit Intuition

We would all like to believe that, ultimately, intellectual honesty is
not only an expression of, but also good for, one's mental health. My
dangerous question is whether one can be intellectually honest about
the issue of free will and preserve one's mental health at the same
time.
Behind this question lies what I call the "Forbidden Fruit Intuition":
Is there a set of questions which are dangerous not on grounds of
ideology or political correctness, but because the most obvious
answers to them could ultimately make our conscious self-models
disintegrate? Can one really believe in determinism without going
insane?

For middle-sized objects at 37°C, like the human brain and the human
body, determinism is obviously true. The next state of the physical
universe is always determined by the previous state. And given a
certain brain-state plus an environment you could never have acted
otherwise -- a surprisingly large majority of experts in the free-will
debate today accept this obvious fact. Although your future is open,
this probably also means that for every single future thought you will
have and for every single decision you will make, it is true that it
was determined by your previous brain state.

As a scientifically well-informed person you believe in this theory,
you endorse it. As an open-minded person you find that you are also
interested in modern philosophy of mind, and you might hear a story
much like the following one. Yes, you are a physically determined
system. But this is not a big problem, because, under certain
conditions, we may still continue to say that you are "free": all that
matters is that your actions are caused by the right kinds of brain
processes and that they originate in you. A physically determined
system can well be sensitive to reasons and to rational arguments, to
moral considerations, to questions of value and ethics, as long as all
of this is appropriately wired into its brain. You can be rational,
and you can be moral, as long as your brain is physically determined
in the right way. You like this basic idea: physical determinism is
compatible with being a free agent. You endorse a materialist
philosophy of freedom as well. As an intellectually honest person open
to empirical data, you simply believe that something along these lines
must be true.

Now you try to feel that it is true. You try to consciously experience
the fact that at any given moment of your life, you could not have
acted otherwise. You try to experience the fact that even your
thoughts, however rational and moral, are predetermined -- by
something unconscious, by something you cannot see. And in doing so,
you start fooling around with the conscious self-model Mother Nature
evolved for you with so much care and precision over millions of
years: You are scratching at the user-surface of your own brain,
tweaking the mouse-pointer, introspectively trying to penetrate into
the operating system, attempting to make the invisible visible. You
are challenging the integrity of your phenomenal self by trying to
integrate your new beliefs, the neuroscientific image of man, with
your most intimate, inner way of experiencing yourself. How does it
feel?

I think that the irritation and deep sense of resentment surrounding
public debates on the freedom of the will actually have nothing much to
do with the actual options on the table. They have to do with the --
perfectly sensible -- intuition that our presently obvious answer will
not only be emotionally disturbing, but ultimately impossible to
integrate into our conscious self-models.

Or our societies: The robust conscious experience of free will is also
a social institution, because attributions of accountability,
responsibility, and the like are the decisive building blocks for
modern, open societies. And the currently obvious answer might be
interpreted by
many as having clearly anti-democratic implications: Making a complex
society work implies controlling the behavior of millions of people;
if individual human beings can control their own behavior to a much
lesser degree than we have thought in the past, if bottom-up doesn't
work, then it becomes tempting to control it top-down, by the state.
And this is the second way in which enlightenment could devour its own
children. Yes, free will truly is a dangerous question, but for
different reasons than most people think.
_________________________________________________________________

DIANE F. HALPERN
Professor of Psychology, Claremont McKenna College; Past-president
(2005), the American Psychological Association; Author, Thought and
Knowledge
[halpern100.jpg]

Choosing the sex of one's child

For an idea to be truly dangerous, it needs to have a strong and near
universal appeal. The idea of being able to choose the sex of one's
own baby is just such an idea.

Anyone who has a deep-seated and profound preference for a son or a
daughter knows that this preference may not be rational and that it
may reveal a prejudice they would rather leave unacknowledged. It is
easy to dismiss the ability to decide the sex of one's baby as
inconsequential. It is already medically feasible for a woman or
couple to choose the sex of a baby that has not yet been conceived.
There are a variety of safe methods available, such as preimplantation
genetic diagnosis (PGD), a technique originally designed for couples
with fertility problems, not for the purpose of selecting the sex of
one's next child. With PGD, embryos are created in a Petri dish, tested
for sex, and then implanted, so that the baby-to-be is already
identified as female or male before implantation in the womb. The pro
argument is simple: If the parents-to-be are
adults, why not? People have always wanted to be able to choose the
sex of their children. There are ancient records of medicine men and
wizened women with various herbs and assorted advice about what to do
to (usually) have a son. So, what should it matter if modern medicine
can finally deliver what old wives' tales have promised for countless
generations? Couples won't have to have a "wasted" child, such as a
second child the same sex as the first one, when they really wanted
"one of each." If a society has too many boys for a while, who cares?
The shortage of females will make females more valuable and the market
economy will even out in time. In the meantime, families will
"balance out," each one the ideal composition as desired by the adults
in the family.

Every year for the last two decades I have asked students in my
college classes to write down the number of children they would like
to have and the order in which they ideally want to have girls and
boys. I have taught in several different countries (e.g., Turkey,
Russia, and Mexico) and types of universities, but despite large
differences, the modal response is 2 children, first a boy, then a
girl. If students reply that they want one child, it is most often a
boy; if it is 3 children, they are most likely to want a boy, then a
girl, then a boy. The students in my classes are not a random sample
of the population: they are well educated and more likely to hold
egalitarian attitudes than the general population. Yet, if they acted
on their stated intentions, even they would have an excess of
first-borns who are male, and an excess of males overall. In a short
time, those personality characteristics associated with being either
an only-child or first-born and those associated with being male would
be so confounded, it would be difficult to separate them.
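
As a rough illustration of that arithmetic (with invented proportions,
not the actual classroom responses), stated plans of this kind can be
translated directly into the implied sex ratios:

    # hypothetical shares of stated family plans ("B" = boy, "G" = girl)
    plans = {
        ("B",): 0.10,
        ("B", "G"): 0.55,
        ("G", "B"): 0.15,
        ("B", "G", "B"): 0.20,
    }
    firstborn = {"B": 0.0, "G": 0.0}
    births = {"B": 0.0, "G": 0.0}
    for plan, share in plans.items():
        firstborn[plan[0]] += share
        for child in plan:
            births[child] += share
    print("male share of firstborns:", firstborn["B"] / sum(firstborn.values()))
    print("male share of all births:", births["B"] / sum(births.values()))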

The excess of males that would result from allowing every mother or
couple to choose the sex of their next baby would not correct itself
at the societal level because at the individual level, the preference
for sons is stronger than the market forces of supply and demand. The
evidence for this conclusion comes from many sources, including
regions of the world where the ratio of young women to men is so low
that it could only be caused by selective abortion and female
infanticide (UNICEF and other sources). In some regions of rural China
there are so few women that wives are imported from the Philippines
and men move to far cities to find women to marry. In response, the
Chinese government is now offering a variety of education and cash
incentives to families with multiple daughters. There are still few
daughters being born in these rural areas where prejudice against
girls is stronger than government incentives and mandates. In India,
the number of abortions of female fetuses has increased since
sex-selective abortion was made illegal in 1994. The desire for sons
is even stronger than the threat of legal action.

In the United States, the data that show preferences for sons are more
subtle than the disparate ratios of females and males found in other
parts of the world, but the preference for sons is still strong.
Because of space limitations, I list only a few of the many indicators
that parents in the United States prefer sons: families with 2
daughters are more likely to have a third child than families with 2
sons; unmarried pregnant women who undergo ultrasound to determine the
sex of the yet unborn child are less likely to be married at the time
of the child's birth when the child is a girl than when it is a boy;
and divorced women with a son are more likely to remarry than divorced
women with a daughter.

Perhaps the only ideas more dangerous than that of choosing the sex of
one's child would be trying to stop medical science from making
advances that allow such choices or allowing the government to control
the choices we can make as citizens. There are many important
questions to ponder, including how to find creative ways to reduce or
avoid negative consequences from even more dangerous alternatives.
Consider, for example: what would our world be like if there were
substantially more men than women? What if only the rich or only those
who live in "rich countries" were able to choose the sex of their
children? Is it likely that an approximately equal number of boys and
girls would be or could be selected? If not, could a society or should
a society make equal numbers of girls and boys a goal?

I am guessing that many readers of child-bearing age want to choose
the sex of their (as yet) unconceived children and can reason that
there is no harm in this practice. And, if you could also choose
intelligence, height, and hair color, would you add that too?  But
then, there are few things in life that are as appealing as the
possibility of a perfectly balanced family, which according to the
modal response means an older son and younger daughter, looking just
like an improved version of you.
_________________________________________________________________

GARY MARCUS
Psychologist, New York University; Author, The Birth of the Mind
[marcus100.jpg]

Minds, genes, and machines

Brains exist primarily to do two things: to communicate (transfer
information) and to compute. This is true in every creature with a
nervous system, and no less true in the human brain. In short, the
brain is a machine. And the basic structure of that brain, biological
substrate of all things mental, is guided in no small part by
information carried in the DNA.

In the twenty-first century, these claims should no longer be
controversial. With each passing day, techniques like magnetic
resonance imaging and electrophysiological recordings from individual
neurons make it clearer that the business of the brain is information
processing, while new fields like comparative genomics and
developmental neuroembryology remove any possible doubt that genes
significantly influence both behavior and brain.

Yet there are many people, scientists and lay persons alike, who fear
or wish to deny these notions, to doubt or even reject the idea that
the mind is a machine, and that it is significantly (though of course
not exclusively) shaped by genes. Even as the religious right prays
for Intelligent Design, the academic left insinuates that merely
discussing the idea of innateness is dangerous, as in a prominent
child development manifesto that concluded:

If scientists use words like "instinct" and "innateness" in
reference to human abilities, then we have a moral responsibility
to be very clear and explicit about what we mean. If our careless,
underspecified choice of words inadvertently does damage to future
generations of children, we cannot turn with innocent outrage to
the judge and say "But your Honor, I didn't realize the word was
loaded."

A new academic journal called "Metascience" focuses on when
extra-scientific considerations influence the process of science.
Sadly, the twin questions of whether we are machines, and whether we
are constrained significantly by our biology, very much fall into this
category, questions where members of the academy (not to mention fans
of Intelligent Design) close their minds.

Copernicus put us in our place, so to speak, by showing that our
planet is not at the center of the universe; advances in biology are
putting us further in our place by showing that our brains are as much
a product of biology as any other part of our body, and by showing
that our (human) brains are built by the very same processes as other
creatures. Just as the earth is one planet among many, from the
perspective of the toolkit of developmental biology, our brain is just
one more arrangement of molecules.
_________________________________________________________________

JARON LANIER
Computer Scientist and Musician
[jaron100.jpg]

Homuncular Flexibility

The homunculus is an approximate mapping of the human body in the
cortex. It is often visualized as a distorted human body stretched
along the top of the human brain. The tongue, thumbs, and other body
parts with extra-rich brain connections are enlarged in the
homunculus, giving it a vaguely obscene, impish character.

Long ago, in the 1980s, my colleagues and I at VPL Research built
virtual worlds in which more than one person at a time could be
present. People in a shared virtual world must be able to see each
other, as well as use their bodies together, as when two people lift a
large virtual object or ride a tandem virtual bicycle. None of this
would be possible without virtual bodies.

It was a self-evident and inviting challenge to attempt to create the
most accurate possible bodies, given the crude state of the technology
at the time. To do this, we developed full body suits covered in
sensors. A measurement made on the body of someone wearing one of
these suits, such as an aspect of the flex of a wrist, would be
applied to control a corresponding change in a virtual body. Before
long, people were dancing and otherwise goofing around in virtual
reality.

Of course there were bugs. I distinctly remember a wonderful bug that
caused my hand to become enormous, like a web of flying skyscrapers.
As is often the case, this accident led to an interesting discovery.

It turned out that people could quickly learn to inhabit strange and
different bodies and still interact with the virtual world. I became
curious how weird the body could get before the mind would become
disoriented. I played around with elongated limb segments, and strange
limb placement. The most curious experiment involved a virtual lobster
(which was lovingly modeled by Ann Lasko.) A lobster has a trio of
little midriff arms on each side of its body. If physical human bodies
sprouted corresponding limbs, we would have measured them with an
appropriate body suit and that would have been that.

I assume it will not come as a surprise to the reader that the human
body does not include these little arms, so the question arose of how
to control them. The answer was to extract a little influence from
each of many parts of the physical body and merge these data streams
into a single control signal for a given joint in the extra lobster
limbs. A touch of human elbow twist, a dash of human knee flex; a
dozen such movements might be mixed to control the middle joint of
little left limb #3. The result was that the principal elbows and
knees could still control their virtual counterparts roughly as
before, while still contributing to the control of additional limbs.
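
A hypothetical sketch of that kind of mixing (illustrative Python, not
VPL's actual code): many measured body angles, each given a small
weight, are blended into a single control signal for one extra virtual
joint.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors = 12                                 # elbow twist, knee flex, wrist bend, ...
    weights = rng.uniform(0.02, 0.12, n_sensors)   # "a touch" of this, "a dash" of that
    weights /= weights.sum()                       # keep the blended angle bounded

    def extra_joint_angle(sensor_angles):
        """Map measured body-joint angles (radians) to one extra virtual joint."""
        return float(weights @ np.asarray(sensor_angles))

    # e.g. the current pose of the physical body, as read from a sensor suit
    pose = rng.uniform(-0.5, 0.5, n_sensors)
    print(extra_joint_angle(pose))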

Yes, it turns out people can learn to control bodies with extra limbs!

The biologist Jim Bower, when considering this phenomenon, commented
that the human nervous system evolved through all the creatures that
preceded us in our long evolutionary line, which included some pretty
strange creatures, if you go back far enough. Why wouldn't we retain
some homuncular flexibility with a pedigree like that?

The original experiments of the 1980s were not carried out formally,
but recently it has become possible to explore the phenomenon in a far
more rigorous way. Jeremy Bailenson at Stanford has created a
marvelous new lab for studying multiple human subjects in
high-definition shared virtual worlds, and we are now planning to
repeat, improve, and extend these experiments. The most interesting
questions still concern the limits to homuncular flexibility. We are
only beginning the project of mapping how far it can go.

Why is homuncular flexibility a dangerous idea? Because the more
flexible the human brain turns out to be when it comes to adapting to
weirdness, the weirder a ride it will be able to keep up with as
technology changes in the coming decades and centuries.

Will kids in the future grow up with the experience of living in four
spatial dimensions as well as three? That would be a world with a fun
elementary school math curriculum! If you're most interested in raw
accumulation of technological power, then you might not find this
so interesting, but if you think in terms of how human experience can
change, then this is the most fascinating stuff there is.

Homuncular flexibility isn't the only source of hints about how weird
human experience might get in the future. There are also questions related
to language, memory, and other aspects of cognition, as well as
hypothetical prospects for engineering changes in the brain. But in
this one area, there's an indication of high weirdness to come, and I
find that prospect dangerous, but in a beautiful and seductive way.
"Thrilling" might be a better word.
_________________________________________________________________

W. DANIEL HILLIS
Physicist, Computer Scientist; Chairman, Applied Minds, Inc.; Author,
The Pattern on the Stone
[hillis100.jpg]
The idea that we should all share our most dangerous ideas

I don't share my most dangerous ideas. Ideas are the most powerful
forces that we can unleash upon the world, and they should not be let
loose without careful consideration of their consequences. Some ideas
are dangerous because they are false, like an idea that one race of
humans is more worthy than another, or that one religion has a monopoly
on the truth. False ideas like these spread like wildfire, and have
caused immeasurable harm. They still do. Such false ideas should
obviously not be spread or encouraged, but there are also plenty of
true ideas that should not be spread: ideas about how to cause terror
and pain and chaos, ideas of how to better convince people of things
that are not true.

I have often seen otherwise thoughtful people so caught up in such an
idea that they seem unable to resist sharing it. To me, the idea that
we should all share our most dangerous ideas is, itself, a very
dangerous idea. I just hope that it never catches on.
_________________________________________________________________

NEIL GERSHENFELD
Physicist; Director, Center for Bits and Atoms, MIT; Author, Fab
[gershenfeld100.jpg]
Democratizing access to the means of invention

The elite temples of research (of the kind I've happily spent my
career in) may be becoming intellectual dinosaurs as a result of the
digitization and personalization of fabrication.

Today, with about $20k in equipment it's possible to make and measure
things from microns and microseconds on up, and that boundary is
quickly receding. When I came to MIT that was hard to do. If it's no
longer necessary to go to MIT for its facilities, then surely the
intellectual community is its real resource? But my colleagues (and I)
are always either traveling or over-scheduled; the best way for us to
see each other is to go somewhere else. Like many people, my closest
collaborators are in fact distributed around the world.

The ultimate consequence of the digitization of first communications,
then computation, and now fabrication, is to democratize access to the
means of invention. The third world can skip over the first and second
cultures and go right to developing a third culture. Rather than
today's model of researchers researching for researchees, the result
of all that discovery has been to enable a planet of creators rather
than consumers.
_________________________________________________________________

PAUL STEINHARDT
Albert Einstein Professor of Science, Princeton University
[steinhardt100.jpg]
It's a matter of time

For decades, the commonly held view among scientists has been that
space and time first emerged about fourteen billion years ago in a big
bang. According to this picture, the cosmos transformed from a nearly
uniform gas of elementary particles to its current complex hierarchy
of structure, ranging from quarks to galaxy superclusters, through an
evolutionary process governed by simple, universal physical laws. In
the past few years, though, confidence in this point of view has been
shaken as physicists have discovered finely tuned features of our
universe that seem to defy natural explanation.

The prime culprit is the cosmological constant, which astronomers have
measured to be exponentially smaller than naïve estimates would
predict. On the one hand, it is crucial that the cosmological constant
be so small or else it would cause space to expand so rapidly that
galaxies and stars would never form. On the other hand, no theoretical
mechanism has been found within the standard Big Bang picture that
would explain the tiny value.

Desperation has led to a "dangerous" idea: perhaps we live in an
anthropically selected universe. According to this view, we live in a
multiverse (a multitude of universes) in which the cosmological
constant varies randomly from one universe to the next. In most
universes, the value is incompatible with the formation of galaxies,
planets, and stars. The reason our cosmological constant has the
value it does is that it is one of the rare examples in which
the value happens to lie in the narrow range compatible with life.
This is the ultimate example of "unintelligent design": the multiverse
tries every possibility with reckless abandon and only very rarely
gets things "right;" that is, consistent with everything we actually
observe. It suggests that the creation of unimaginably enormous
volumes of uninhabitable space is essential to obtain a few rare
habitable spaces.

I consider this approach to be extremely dangerous for two reasons.
First, it relies on complex assumptions about physical conditions far
beyond the range of conceivable observation so it is not
scientifically verifiable. Secondly, I think it leads inevitably to a
depressing end to science. What is the point of exploring further the
randomly chosen physical properties in our tiny corner of the
multiverse if most of the multiverse is so different? I think it is
far too early to be so desperate. This is a dangerous idea that I am
simply unwilling to contemplate.

My own "dangerous" idea is more optimistic but precarious because it
bucks the current trends in cosmological thinking. I believe that the
finely tuned features may be naturally explained by supposing that our
universe is much older than we have imagined. With more time, a new
possibility emerges. The cosmological "constant" may not be constant
after all. Perhaps it is varying so slowly that it only appears to be
constant. Originally it had the much larger value that we would
naturally estimate, but the universe is so old that its value has had
a chance to relax to the tiny value measured today. Furthermore, in
several concrete examples, one finds that the evolution of the
cosmological constant slows down as its value approaches zero, so most
of the history of the universe transpires when its value is tiny, just
as we find today.

This idea that the cosmological constant is decreasing has been
considered in the past. In fact, physically plausible slow-relaxation
mechanisms have been identified. But the timing was thought to be
impossible. If the cosmological constant decreases very slowly, it
causes the expansion rate to accelerate too early and galaxies never
form. If it decreases too quickly, the expansion rate never
accelerates, which is inconsistent with recent observations. As long
as the cosmological constant has only 14 billion years to evolve,
there is no feasible solution.

But, recently, some cosmologists have been exploring the possibility
that the universe is exponentially older. In this picture, the
evolution of the universe is cyclic. The Big Bang is not the beginning
of space and time but, rather, a sudden creation of hot matter and
radiation that marks the transition from one period of expansion and
cooling to the next cycle of evolution. Each cycle might last a
trillion years, say. Fourteen billion years marks the time since the
last infusion of matter and radiation, but this is brief compared to
the total age of the universe. Each cycle lasts about a trillion years
and the number of cycles in the past may have been ten to the googol
power or more!

Then, using the slow relaxation mechanisms considered previously, it
becomes possible that the cosmological constant decreases steadily
from one cycle to the next. Since the number of cycles is likely to be
enormous, there is enough time for the cosmological constant to shrink
by an exponential factor, even though the decrease over the course of
any one cycle is too small to be detectable. Because the evolution
slows down as the cosmological constant decreases, this is the period
when most of the cycles take place. There is no multiverse and there
is nothing special about our region of space -- we live in a typical
region at a typical time.
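
To see why the sheer number of cycles does the work here, note (as a
back-of-the-envelope sketch, not the author's own numbers) that if the
constant loses a tiny fraction \epsilon of its value in each cycle,
then after N cycles

    \Lambda_N = \Lambda_0 (1 - \epsilon)^N \approx \Lambda_0 \, e^{-\epsilon N},

so a cumulative suppression of, say, 10^-120 requires only
\epsilon N \approx 276; a per-cycle decrease far too small to measure
can supply this once N approaches the enormous numbers just described.
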
Remarkably, this idea is scientifically testable. The picture makes
explicit predictions about the distribution of primordial
gravitational waves and variations in temperature and density. Also,
if the cosmological constant is evolving at the slow rate suggested,
then ongoing attempts to detect a temporal variation should find no
change. So, we may enjoy speculating now about which dangerous ideas
we prefer, but ultimately it is Nature that will decide if any of them
is right. It is just a matter of time.
_________________________________________________________________

SAM HARRIS
Neuroscience Graduate Student, UCLA; Author, The End of Faith
[harriss101.jpg]
Science Must Destroy Religion

Most people believe that the Creator of the universe wrote (or
dictated) one of their books. Unfortunately, there are many books that
pretend to divine authorship, and each makes incompatible claims about
how we all must live. Despite the ecumenical efforts of many
well-intentioned people, these irreconcilable religious commitments
still inspire an appalling amount of human conflict.

In response to this situation, most sensible people advocate something
called "religious tolerance." While religious tolerance is surely
better than religious war, tolerance is not without its liabilities.
Our fear of provoking religious hatred has rendered us incapable of
criticizing ideas that are now patently absurd and increasingly
maladaptive. It has also obliged us to lie to ourselves -- repeatedly
and at the highest levels -- about the compatibility between religious
faith and scientific rationality.

The conflict between religion and science is inherent and (very
nearly) zero-sum. The success of science often comes at the expense of
religious dogma; the maintenance of religious dogma always comes at
the expense of science. It is time we conceded a basic fact of human
discourse: either a person has good reasons for what he believes, or
he does not. When a person has good reasons, his beliefs contribute to
our growing understanding of the world. We need not distinguish
between "hard" and "soft" science here, or between science and other
evidence-based disciplines like history. There happen to be very good
reasons to believe that the Japanese bombed Pearl Harbor on December
7th, 1941. Consequently, the idea that the Egyptians actually did it
lacks credibility. Every sane human being recognizes that to rely
merely upon "faith" to decide specific questions of historical fact
would be both idiotic and grotesque -- that is, until the conversation
turns to the origin of books like the Bible and the Koran, to the
resurrection of Jesus, to Muhammad's conversation with the angel
Gabriel, or to any of the other hallowed travesties that still crowd
the altar of human ignorance.

Science, in the broadest sense, includes all reasonable claims to
knowledge about ourselves and the world. If there were good reasons to
believe that Jesus was born of a virgin, or that Muhammad flew to
heaven on a winged horse, these beliefs would necessarily form part of
our rational description of the universe. Faith is nothing more than
the license that religious people give one another to believe such
propositions when reasons fail. The difference between science and
religion is the difference between a willingness to dispassionately
consider new evidence and new arguments, and a passionate
unwillingness to do so. The distinction could not be more obvious, or
more consequential, and yet it is everywhere elided, even in the ivory
tower.

Religion is fast becoming incompatible with the emergence of a global,
civil society. Religious faith -- faith that there is a God who cares
what name he is called, that one of our books is infallible, that
Jesus is coming back to earth to judge the living and the dead, that
Muslim martyrs go straight to Paradise, etc. -- is on the wrong side
of an escalating war of ideas. The difference between science and
religion is the difference between a genuine openness to the fruits of
human inquiry in the 21st century, and a premature closure to such
inquiry as a matter of principle. I believe that the antagonism
between reason and faith will only grow more pervasive and intractable
in the coming years. Iron Age beliefs -- about God, the soul, sin,
free will, etc. -- continue to impede medical research and distort
public policy. The possibility that we could elect a U.S. President
who takes biblical prophecy seriously is real and terrifying; the
likelihood that we will one day confront Islamists armed with nuclear
or biological weapons is also terrifying, and growing more probable by
the day. We are doing very little, at the level of our intellectual
discourse, to prevent such possibilities.
In the spirit of religious tolerance, most scientists are keeping
silent when they should be blasting the hideous fantasies of a prior
age with all the facts at their disposal.

To win this war of ideas, scientists and other rational people will
need to find new ways of talking about ethics and spiritual
experience. The distinction between science and religion is not a
matter of excluding our ethical intuitions and non-ordinary states of
consciousness from our conversation about the world; it is a matter of
our being rigorous about what is reasonable to conclude on their
basis. We must find ways of meeting our emotional needs that do not
require the abject embrace of the preposterous. We must learn to
invoke the power of ritual and to mark those transitions in every
human life that demand profundity -- birth, marriage, death, etc. --
without lying to ourselves about the nature of reality.

I am hopeful that the necessary transformation in our thinking will
come about as our scientific understanding of ourselves matures. When
we find reliable ways to make human beings more loving, less fearful,
and genuinely enraptured by the fact of our appearance in the cosmos,
we will have no need for divisive religious myths. Only then will the
practice of raising our children to believe that they are Christian,
Jewish, Muslim, or Hindu be broadly recognized as the ludicrous
obscenity that it is. And only then will we stand a chance of healing
the deepest and most dangerous fractures in our world.
_________________________________________________________________

SCOTT ATRAN
Anthropologist, University of Michigan; Author, In Gods We Trust
[atran.100.jpg]

Science encourages religion in the long run (and vice versa)

Ever since Edward Gibbon's Decline and Fall of the Roman Empire,
scientists and secularly-minded scholars have been predicting the
ultimate demise of religion. But, if anything, religious fervor is
increasing across the world, including in the United States, the
world's most economically powerful and scientifically advanced
society. An underlying reason is that science treats humans and
intentions only as incidental elements in the universe, whereas for
religion they are central. Science is not particularly well-suited to
deal with people's existential anxieties, including death, deception,
sudden catastrophe, loneliness or longing for love or justice. It
cannot tell us what we ought to do, only what we can do. Religion
thrives because it addresses people's deepest emotional yearnings and
society's foundational moral needs, perhaps even more so in complex
and mobile societies that are increasingly divorced from nurturing
family settings and long familiar environments.

From a scientific perspective of the overall structure and design of
the physical universe:

1. Human beings are accidental and incidental products of the material
development of the universe, almost wholly irrelevant and readily
ignored in any general description of its functioning.

Beyond Earth, there is no intelligence -- however alien or like our
own -- that is watching out for us or cares. We are alone.

2. Human intelligence and reason, which searches for the hidden traps
and causes in our surroundings, evolved and will always remain leashed
to our animal passions -- in the struggle for survival, the quest for
love, the yearning for social standing and belonging.

This intelligence does not easily suffer loneliness, any more than it
abides the looming prospect of death, whether individual or
collective.

Religion is the hope that science is missing (something more in the
endeavor to miss nothing).

But doesn't religion impede science, and vice versa? Not necessarily.
Leaving aside the sociopolitical stakes in the opposition between
science and religion (which vary widely and are not constitutive of
science or religion per se -- Calvin considered obedience to tyrants
as exhibiting trust in God, Franklin wanted the motto of the American
Republic to be "rebellion against tyranny is obedience to God"), a
crucial difference between science and religion is that factual
knowledge as such is not a principal aim of religious devotion, but
plays only a supporting role. Only in the last decade has the Catholic
Church reluctantly acknowledged the factual plausibility of
Copernicus, Galileo and Darwin. Earlier religious rejection of their
theories stemmed from challenges posed to a cosmic order unifying the
moral and material worlds. Separating out the core of the material
world would be like draining the pond where a water lily grows. A long
lag time was necessary to refurbish and remake the moral and material
connections in such a way that would permit faith in a unified
cosmology to survive.
_________________________________________________________________

MARCELO GLEISER
Physicist, Dartmouth College; Author, The Prophet and the Astronomer
[gleiser100.jpg]

Can science explain itself?

There have been many times when I asked myself if we scientists,
especially those seeking to answer "ultimate" kinds of questions, such
as the origin of the Universe, are not beating on the wrong drum. Of
course, by trying to answer such questions as the origin of everything,
we assume we can. We plow ahead, proposing tentative models that join
general relativity and quantum mechanics and use knowledge from high
energy physics to propose models where the universe pops out of
nothing, no energy required, due to a random quantum fluctuation. To
this, we tag along the randomness of fundamental constants, saying
that their values are the way they are due to an accident: other
universes may well have other values of the charge and mass of the
electron and thus completely different properties. So, our universe
becomes this very special place where things "conspire" to produce
galaxies, stars, planets, and life.

What if this is all bogus? What if we look at science as a narrative, a
description of the world that has limitations based on its structure?
The constants of Nature are the letters of the alphabet, the laws are
the grammar rules and we build these descriptions through the guiding
hand of the so-called scientific method. Period. To say things are
this way because otherwise we wouldn't be here to ask the question is
to miss the point altogether: things are this way because this is the
story we humans tell based on the way we see the world and explain it.

If we take this to the extreme, it means that we will never be able to
answer the question of the origin of the Universe, since it implicitly
assumes that science can explain itself. We can build any cool and
creative models we want using any marriage of quantum mechanics and
relativity, but we still won't understand why these laws and not
others. In a sense, this means that our science is our science and not
something universally true as many believe it is. This is not bad at
all, given what we can do with it, but it does place limits on
knowledge. Which may not be a bad thing either. It's OK not to
know everything; it doesn't make science weaker. Only more human.
_________________________________________________________________

DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box:
Innovation from the Inside Out
[rushkoff100.jpg]

Open Source Currency

It's not only dangerous and by most counts preposterous; it's
happening. Open Source or, in more common parlance, "complementary"
currencies are collaboratively established units representing hours of
labor that can be traded for goods or services in lieu of centralized
currency. The advantage is that while the value of centralized
currency is based on its scarcity, the bias of complementary or local
currencies is towards their abundance.

So instead of having to involve the Fed in every transaction -- and
using money that requires being paid back with interest -- we can
invent our own currencies and create value with our labor. It's what
the Japanese did at the height of the recession. No, not the Japanese
government, but unemployed Japanese people who couldn't afford to pay
healthcare costs for their elder relatives in distant cities. They
created a currency through which people could care for someone else's
grandmother, and accrue credits for someone else to take care of
theirs.
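
A toy sketch of the bookkeeping involved (hypothetical Python, not any
particular scheme's implementation): hours of care are the unit, and
accounts are mutual credits with no central issuer and no interest.

    from collections import defaultdict

    balances = defaultdict(float)   # account -> hours of labor credit (may go negative)

    def record_care(caregiver, recipient_family, hours):
        """Caregiver earns hours of credit; the recipient's family owes the same hours."""
        balances[caregiver] += hours
        balances[recipient_family] -= hours

    record_care("alice", "bobs_family", 3.0)   # Alice cares for Bob's grandmother
    record_care("bob", "alices_family", 2.0)   # Bob returns care to Alice's relative
    print(dict(balances))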

Throughout most of history, complementary currencies existed alongside
centralized currency. While local currency was used for labor and
local transactions, centralized currencies were used for long distance
and foreign trade. Local currencies were based on a model of abundance
-- there was so much of it that people constantly invested it. That's
why we saw so many cathedrals being built in the late middle ages, and
unparalleled levels of investment in infrastructure and maintenance.
Centralized currency, on the other hand, needed to retain value over
long distances and periods of time, so it was based on precious and
scarce resources, such as gold.

The problem started during the Renaissance: as kings attempted to
centralize their power, most local currencies were outlawed. This new
monopoly on currency reduced entire economies into scarcity engines,
encouraging competition over collaboration, protectionism over
sharing, and fixed commodities over renewable resources. Today, money
is lent into existence by the Fed or another central bank -- and paid
back with interest.

This cash is a medium; and like any medium, it has certain biases. The
money we use today is just one model of money. Turning currency into
a collaborative phenomenon is the final frontier in the open source
movement. It's what would allow for an economic model that could
support a renewable energy industry, a way for companies such as
Wal-Mart to add value to the communities they currently drain, and a
way of working with money that doesn't have bankruptcy built in as a
given circumstance.
_________________________________________________________________

JUDITH RICH HARRIS
Independent Investigator and Theoretician; Author, The Nurture
Assumption
[harris101.jpg]

The idea of zero parental influence

Is it dangerous to claim that parents have no power at all (other than
genetic) to shape their child's personality, intelligence, or the way
he or she behaves outside the family home? More to the point, is this
claim false? Was I wrong when I proposed that parents' power to do
these things by environmental means is zero, nada, zilch?

A confession: When I first made this proposal ten years ago, I didn't
fully believe it myself. I took an extreme position, the null
hypothesis of zero parental influence, for the sake of scientific
clarity. Making myself an easy target, I invited the establishment --
research psychologists in the academic world -- to shoot me down. I
didn't think it would be all that difficult for them to do so. It was
clear by then that there weren't any big effects of parenting, but I
thought there must be modest effects that I would ultimately have to
acknowledge.

The establishment's failure to shoot me down has been nothing short of
astonishing. One developmental psychologist even admitted, one year
ago on this very website, that researchers hadn't yet found proof that
"parents do shape their children," but she was still convinced that
they will eventually find it, if they just keep searching long enough.

Her comrades in arms have been less forthright. "There are dozens of
studies that show the influence of parents on children!" they kept
saying, but then they'd somehow forget to name them -- perhaps because
these studies were among the ones I had already demolished (by showing
that they lacked the necessary controls or the proper statistical
analyses). Or they'd claim to have newer research that provided an
airtight case for parental influence, but again there was a catch: the
work had never been published in a peer-reviewed journal. When I
investigated, I could find no evidence that the research in question
had actually been done or, if done, that it had produced the results
that were claimed for it. At most, it appeared to consist of
preliminary work, with too little data to be meaningful (or
publishable).

Vaporware, I call it. Some of the vaporware has achieved mythic
status. You may have heard of Stephen Suomi's experiment with nervous
baby monkeys, supposedly showing that those reared by "nurturant"
adoptive monkey mothers turn into calm, socially confident adults. Or
of Jerome Kagan's research with nervous baby humans, supposedly
showing that those reared by "overprotective" (that is, nurturant)
human mothers are more likely to remain fearful.

Researchers like these might well see my ideas as dangerous. But is
the notion of zero parental influence dangerous in any other sense? So
it is alleged. Here's what Frank Farley, former president of the
American Psychological Association, told a journalist in 1998:

[Harris's] thesis is absurd on its face, but consider what might
happen if parents believe this stuff! Will it free some to mistreat
their kids, since "it doesn't matter"? Will it tell parents who are
tired after a long day that they needn't bother even paying any
attention to their kid since "it doesn't matter"?

Farley seems to be saying that the only reason parents are nice to
their children is that they think it will make the children turn out
better! And that if parents believed they had no influence at all on
how their kids turn out, they would be likely to abuse or neglect
them.

Which, it seems to me, is absurd on its face. Most chimpanzee mothers
are nice to their babies and take good care of them. Do chimpanzees
think they're going to influence how their offspring turn out? Doesn't
Frank Farley know anything at all about evolutionary biology and
evolutionary psychology?

My idea is viewed as dangerous by the powers that be, but I don't
think it's dangerous at all. On the contrary: if people accepted it,
it would be a breath of fresh air. Family life, for parents and
children alike, would improve. Look what's happening now as a result
of the faith, obligatory in our culture, in the power of parents to
mold their children's fragile psyches. Parents are exhausting
themselves in their efforts to meet their children's every demand, not
realizing that evolution designed offspring -- nonhuman animals as
well as humans -- to demand more than they really need. Family life
has become phony, because parents are convinced that children need
constant reassurances of their love, so if they don't happen to feel
very loving at a particular time or towards a particular child, they
fake it. Praise is delivered by the bushel, which devalues its worth.
Children have become the masters of the home.

And what has all this sacrifice and effort on the part of parents
bought them? Zilch. There are no indications that children today are
happier, more self-confident, less aggressive, or in better mental
health than they were sixty years ago, when I was a child -- when
homes were run by and for adults, when physical punishment was used
routinely, when fathers were generally unavailable, when praise was a
rare and precious commodity, and when explicit expressions of parental
love were reserved for the deathbed.

Is my idea dangerous? I've never condoned child abuse or neglect; I've
never believed that parents don't matter. The relationship between a
parent and a child is an important one, but it's important in the same
way as the relationship between married partners. A good relationship
is one in which each party cares about the other and derives happiness
from making the other happy. A good relationship is not one in which
one party's central goal is to modify the other's personality.

I think what's really dangerous -- perhaps a better word is tragic --
is the establishment's idea of the all-powerful, and hence
all-blamable, parent.
_________________________________________________________________

ALUN ANDERSON
Senior Consultant, New Scientist
[andersona100.jpg]

Brains cannot become minds without bodies

A common image for popular accounts of "The Mind" is a brain in a
bell jar. The message is that inside that disembodied lump of neural
tissue is everything that is you.

It's a scary image but misleading. A far more dangerous idea is that
brains cannot become minds without bodies, that two-way interactions
between mind and body are crucial to thought and health, and that the
brain may partly think in terms of the motor actions it encodes for the
body's muscles to carry out.

We've probably fallen for disembodied brains because of the academic
tendency to worship abstract thought. If we take a more democratic
view of the whole brain we'd find far more of it being used for
planning and controlling movement than for cogitation. Sports writers
get it right when they describe stars of football or baseball as
"geniuses"! Their genius requires massive brain power and a superb
body, which is perhaps one better than Einstein.

The "brain-body" view is dangerous because it requires many scientists
to change the way they think: it allows back common sense interactions
between brain and body that medical science feels uncomfortable with,
makes more sense of feelings like falling in love and requires a
different approach for people who are trying to create machines with
human-like intelligence. And if this all sounds like mere assertion,
there's plenty of interesting research out there to back it up.

Interactions between mind and body come out strongly in the surprising
links between status and health. Michael Marmot's celebrated studies
show that the lower you are in the pecking order, the worse your
health is likely to be. You can explain away only a small part of the
trend from poorer access to healthcare, or poorer food or living
conditions. For Marmot, the answer lies in "the impact over how much
control you have over life circumstances". The important message is
that state of mind -- perceived status -- translates into state of
body.

The effect of placebos on health delivers a similar message. Trust and
belief are often seen as negative in science and the placebo effect is
dismissed as a kind of "fraud" because it relies on the belief of the
patient. But the real wonder is that faith can work. Placebos can
stimulate the release of pain-relieving endorphins and affect neuronal
firing rates in people with Parkinson's disease.

Body and mind interact too in the most intimate feelings of love and
bonding. Those interactions have been best explored in voles where two
hormones, oxytocin and vasopressin, are critical. The hormones are
released as a result of the "the extended tactile pleasures of
mating", as researchers describe it, and hit pleasure centres in the
brain which essentially "addict" sexual partners to one another.

Humans are surely more cerebral. But brain scans of people in love
show heightened activity where there are lots of oxytocin and
vasopressin receptors. Oxytocin levels rise during orgasm and sexual
arousal, as they do from touching and massage. There are defects in
oxytocin receptors associated with autism. And the hormone boosts the
feeling that you can trust others, which is a key part of intimate
relations. In a recent laboratory "investment game" many investors
would trust all their money to a stranger after a puff of an oxytocin
spray.

These few stories show the importance of the interplay of minds and
hormonal signals, of brains and bodies. This idea has been taken to a
profound level in the well-known studies of Antonio Damasio, who finds
that emotional or "gut feelings" are essential to making decisions.
"We don't separate emotion from cognition like layers in a cake," says
Damasio, "Emotion is in the loop of reason all the time."

Indeed, the way in which reasoning is tied to body actions may be
quite counter-intuitive. Giacomo Rizzolatti discovered "mirror
neurones" in a part of the monkey brain responsible for planning
movement. These nerve cells fire both when a monkey performs an action
(like picking up a peanut) and when the monkey sees someone else do
the same thing. Before long, similar systems were found in human
brains too.

The surprising conclusion may be that when we see someone do
something, the same parts of our brain are activated "as if" we were
doing it ourselves. We may know what other people intend and feel by
simulating what they are doing within the same motor areas of our own
brains.

As Rizzolatti puts it, "the fundamental mechanism that allows us a
direct grasp of the mind of others is not conceptual reasoning but
direct simulation of the observed events through the mirror
mechanism." Direct grasp of others' minds is a special ability that
paves the way for our unique powers of imitation which in turn have
allowed culture to develop.

If bodies and their interaction with brain and planning for action in
the world are so central to human kinds of mind, where does that leave
the chances of creating an intelligent "disembodied mind" inside a
computer? Perhaps the Turing test will be harder than we think. We may
build computers that understand language but which cannot say anything
meaningful, at least until we can give them "extended tactile
experiences". To put it another way, computers may not be able to make
sense until they can have sex.
_________________________________________________________________

TODD E. FEINBERG, M.D.
Psychiatrist and Neurologist, Albert Einstein College of Medicine;
Author, Altered Egos
[feinberg100.jpg]

Myths and fairy tales are not true

"Myths and fairy tales are not true." There is no Easter Bunny, there
is no Santa Claus, and Moses may never have existed. Worse yet, I have
increasing difficulty believing that there is a higher power ruling
the universe. This is my dangerous idea. It is not a dangerous idea to
those who do not share my particular world view or personal fears; to
others it may seem trivially true. But for me, this idea is downright
horrifying.

I came to ponder this idea through my neurological examination of
patients with brain damage that causes a disturbance in their self
concepts and ego functions.

Some of these patients develop, in the course of their illness and
recovery (or otherwise), disturbances of self and personal relatedness
that create enduring delusions and metaphorical confabulations
regarding their bodies, their relationships with loved ones, and their
personal experiences. A patient I examined with a right hemisphere
stroke and paralyzed left arm claimed that the arm was actually
severed from his brother's body by gang members, thrown in the East
river, and later attached to the patient's shoulder. Another patient
with a ruptured brain aneurysm and amnesia who denied his disabilities
claimed he was planning to adopt (a phantom) child who was in need of
medical assistance.

These personal narratives, produced by patients in altered
neurological states and therefore without the constraints imposed by a
fully functioning consciousness, have a dream-like quality, and
constitute "personal myths" that express the patient's beliefs about
themselves. The patient creates a metaphor in which personal
experiences are crystallized in the form of external, real or
fictitious persons, objects, places, or events. When this
occurs, the metaphor serves as a symbolic representation or
externalization of the patient's feelings that the patient does not
realize originate from within the self.

There is an intimate relationship between my patients' narratives and
socially endorsed fairy tales and mythologies. This is particularly
apparent when mythologies deal with themes relating to a loss of self,
personal identity or death. For many people, the notion of personal
death is extremely difficult to grasp and fully accommodate within
one's self image. For many, in order to go on with life, death must be
denied. Therefore, to help the individual deal with the prospect of
the inevitability of personal death, cultural and religious
institutions provide metaphors of everlasting life. Just as my
patients adapt to difficult realities by creating metaphorical
substitutes, it appears to me that beliefs in angels, deities and
eternal souls can be understood in part as wish fulfilling metaphors
for an unpleasant reality that most of us cannot fully comprehend and
accept.

Unfortunately, just as my patients' myths are not true, neither are
those that I was brought up to believe in.
_________________________________________________________________

STEWART BRAND
Founder, Whole Earth Catalog; cofounder, The Well; cofounder, Global
Business Network; Author, How Buildings Learn
[brand100.jpg]

What if public policy makers have an obligation to engage historians,
and historians have an obligation to try to help?

All historians understand that they must never, ever talk about the
future. Their discipline requires that they deal in facts, and the
future doesn't have any yet. A solid theory of history might be able
to embrace the future, but all such theories have been discredited.
Thus historians do not offer to take part in shaping public policy,
and are seldom invited to. They leave that to economists.

But discussions among policy makers always invoke history anyway,
usually in simplistic form. "Munich" and "Vietnam," devoid of detail
or nuance, stand for certain kinds of failure. "Marshall Plan" and
"Man on the Moon" stand for certain kinds of success. Such totemic
invocation of history is the opposite of learning from history, and
Santayana's warning continues in force, that those who fail to learn
from history are condemned to repeat it.

A dangerous thought: What if public policy makers have an obligation
to engage historians, and historians have an obligation to try to
help?

And instead of just retailing advice, go generic. Historians could set
about developing a rigorous sub-discipline called "Applied History."

There is only one significant book on the subject, published in 1988.
Thinking In Time: The Uses of History for Decision Makers was written
by the late Richard Neustadt and Ernest May, who long taught a course
on the subject at Harvard's Kennedy School of Government. (A course
called "Reasoning from History" is currently taught there by Alexander
Keyssar.)

Done wrong, Applied History could paralyze public decision making and
corrupt the practice of history -- that's the danger. But done right,
Applied History could make decision making and policy far more
sophisticated and adaptive, and it could invest the study of history
with the level of consequence it deserves.
_________________________________________________________________

JARED DIAMOND
Biologist; Geographer, UCLA; Author, Collapse
[diamond100.jpg]

The evidence that tribal peoples often damage their environments and
make war.

Why is this idea dangerous? Because too many people today believe that
a reason not to mistreat tribal people is that they are too nice or
wise or peaceful to do those evil things, which only we evil citizens
of state governments do. The idea is dangerous because, if you believe
that that's the reason not to mistreat tribal peoples, then proof of
the idea's truth would suggest that it's OK to mistreat them. In fact,
the evidence seems to me overwhelming that the dangerous idea is true.
But we should treat other people well because of ethical reasons, not
because of naïve anthropological theories that will almost surely
prove false.
_________________________________________________________________

LEONARD SUSSKIND
Physicist, Stanford University; Author, The Cosmic Landscape
[susskind100.jpg]

The "Landscape"

I have been accused of advocating an extremely dangerous idea.

According to some people, the "Landscape" idea will eventually ensure
that the forces of intelligent design (and other unscientific
religious ideas) will triumph over true science. From one of my most
distinguished colleagues:

From a political, cultural point of view, it's not that these
arguments are religious but that they denude us from our historical
strength in opposing religion.

Others have expressed the fear that my ideas, and those of my friends,
will lead to the end of science (methinks they overestimate me). One
physicist calls it "millennial madness."

And from another quarter, Christoph Schönborn, Cardinal Archbishop of
Vienna has accused me of "an abdication of human intelligence."

As you may have guessed the idea in question is the Anthropic
Principle: a principle that seeks to explain the laws of physics, and
the constants of nature, by saying, "If they (the laws of physics)
were different, intelligent life would not exist to ask why laws of
nature are what they are."

On the face of it, the Anthropic Principle is far too silly to be
dangerous. It sounds no more sensible than explaining the evolution of
the eye by saying that unless the eye evolved, there would be no one
to read this page. But the A.P. is really shorthand for a rich set of
ideas that are beginning to influence and even dominate the thinking
of almost all serious theoretical physicists and cosmologists.

Let me strip the idea down to its essentials. Without all the
philosophical baggage, what it says is straightforward: The universe
is vastly bigger than the portion that we can see; and, on a very
large scale it is as varied as possible. In other words, rather than
being a homogeneous, mono-colored blanket, it is a crazy-quilt
patchwork of different environments. This is not an idle speculation.
There is a growing body of empirical evidence confirming the
inflationary theory of cosmology, which underlies the hugeness and
hypothetical diversity of the universe.

Meanwhile string theorists, much to the regret of many of them, are
discovering that the number of possible environments described by
their equations is far beyond millions or billions. This enormous
space of possibilities, whose multiplicity may exceed ten to the 500
power, is called the Landscape. If these things prove to be true, then
some features of the laws of physics (maybe most) will be local
environmental facts rather than written-in-stone laws: laws that could
not be otherwise. The explanation of some numerical coincidences will
necessarily be that most of the multiverse is uninhabitable, but in
some very tiny fraction conditions are fine-tuned enough for
intelligent life to form.

That's the dangerous idea and it is spreading like a cancer.

Why is it that so many physicists find these ideas alarming? Well,
they do threaten physicists' fondest hope, the hope that some
extraordinarily beautiful mathematical principle will be discovered: a
principle that would completely and uniquely explain every detail of
the laws of particle physics (and therefore nuclear, atomic, and
chemical physics). The enormous Landscape of Possibilities inherent in
our best theory seems to dash that hope.

What further worries many physicists is that the Landscape may be so
rich that almost anything can be found: any combination of physical
constants, particle masses, etc. This, they fear, would eliminate the
predictive power of physics. Environmental facts are nothing more than
environmental facts. They worry that if everything is possible, there
will be no way to falsify the theory -- or, more to the point, no way
to confirm it. Is the danger real? We shall see.

Another danger that some of my colleagues perceive, is that if we
"senior physicists" allow ourselves to be seduced by the Anthropic
Principle, young physicists will give up looking for the "true" reason
for things, the beautiful mathematical principle. My guess is that if
the young generation of scientists is really that spineless, then
science is doomed anyway. But as we know, the ambition of all young
scientists is to make fools of their elders.

And why does the Cardinal Archbishop Schönborn find the Landscape and
the Multiverse so dangerous? I will let him explain it himself:

Now, at the beginning of the 21st century, faced with scientific
claims like neo-Darwinism and the multiverse hypothesis in
cosmology invented to avoid the overwhelming evidence for purpose
and design found in modern science, the Catholic Church will again
defend human nature by proclaiming that the immanent design evident
in nature is real. Scientific theories that try to explain away the
appearance of design as the result of 'chance and necessity' are
not scientific at all, but, as John Paul put it, an abdication of
human intelligence.

Abdication of human intelligence? No, it's called science.
_________________________________________________________________

GERALD HOLTON
Mallinckrodt Research Professor of Physics and Research Professor of
History of Science, Harvard University; Author, Thematic Origins of
Scientific Thought
[holton100.jpg]

The medicalization of the ancient yearning for immortality

Since the major absorption of scientific method into the research and
practice of medicine in the 1860s, the longevity curve, at least for
the white population in industrial countries, has taken off and
continued to climb fairly steadily. That has been on the whole a benign
result, and has begun to introduce the idea of tolerably good health
as one of the basic Human Rights. But one now reads of projections to
200 years, and perhaps more. The economic, social and human costs of
the increasing fraction of very elderly citizens have begun to be
noticed already.

To glimpse one of the possible results of the continuing projection of
the longevity curve in terms of a plausible scenario: The matriarch of
the family, on her deathbed at age 200, is being visited by the
surviving, grieving family members: a son and a daughter, each aged
about 180, plus /their/ three "children", around 150-160 years old
each, plus all their offspring, in the range of 120 to 130, and so
on..... A touching picture. But what are all the "costs" involved?
_________________________________________________________________

CHARLES SEIFE
Professor of Journalism, New York University; formerly journalist,
Science magazine; Author, Zero: The Biography Of A Dangerous Idea
[seife100.jpg]


Nothing

Nothing can be more dangerous than nothing.

Humanity's always been uncomfortable with zero and the void. The
ancient Greeks declared them unnatural and unreal. Theologians argued
that God's first act was to banish the void by the act of creating the
universe ex nihilo, and Middle-Ages thinkers tried to ban zero and the
other Arabic "ciphers." But the emptiness is all around us -- most of
the universe is void. Even as we huddle around our hearths and invent
stories to convince ourselves that the cosmos is warm and full and
inviting, nothingness stares back at us with empty eye sockets.
_________________________________________________________________

KARL SABBAGH
Writer and Television Producer; Author, The Riemann Hypothesis
[sabbagh100.jpg]

The human brain and its products are incapable of understanding the
truths about the universe

Our brains may never be well enough equipped to understand the
universe, and we are fooling ourselves if we think they ever will be.
Why should we expect to be able eventually to understand how the
universe originated, evolved, and operates? While human brains are
complex and capable of many amazing things, there is not necessarily
any match between the complexity of the universe and the complexity of
our brains, any more than a dog's brain is capable of understanding
every detail of the world of cats and bones, or the dynamics of stick
trajectories when thrown. Dogs get by and so do we, but do we have a
right to expect that the harder we puzzle over these things the nearer
we will get to the truth? Recently I stood in front of a three metre
high model of the Ptolemaic universe in the Museum of the History of
Science in Florence and I remembered how well that worked as a
representation of the motions of the planets until Copernicus and
Kepler came along.

Nowadays, no element of the theory of giant interlocking cogwheels at
work is of any use in understanding the motions of the stars and
planets (and indeed Ptolemy himself did not argue that the universe
really was run by giant cogwheels). Occam's Razor is used to compare
two theories and allow us to choose which is more likely to be 'true',
but hasn't it become a comfort blanket whenever we are faced with
aspects of the universe that seem unutterably complex -- string theory,
for example? But is string theory just the Ptolemaic clockwork de nos
jours? Can it be succeeded by some simplification or might the truth
be even more complex and far beyond the neural networks of our brain
to understand?

The history of science is littered with examples of two types of
knowledge advancement. There is imperfect understanding that 'sort of'
works, and is then modified and replaced by something that works
better, without destroying the validity of the earlier theory.
Newton's theory of gravitation was replaced by Einstein's. Then there is
imperfect understanding that is replaced by some new idea which owes
nothing to older ones. Phlogiston theory, the ether, and so on are
replaced by ideas which save the phenomena, lead to predictions, and
convince us that they are nearer the truth. Which of these categories
really covers today's science? Could we be fooling ourselves by
playing around with modern phlogiston?

And even if we are on the right lines in some areas, how much of what
there is to be understood in the universe do we really understand?
Fifty percent? Five percent? The dangerous idea is that perhaps we
understand half a percent and all the brain and computer power we can
muster may take us up to one or two percent in the lifetime of the
human race.

Paradoxically, we may find that the only justification for pursuing
scientific knowledge is for the practical applications it leads to --
a view that runs contrary to the traditional support of knowledge for
knowledge's sake. And why is this paradoxical? Because the most
important advances in technology have come out of research that was
not seeking to develop those advances but to understand the universe.
So if my dangerous idea is right -- that the human brain and its
products are actually incapable of understanding the truths about the
universe -- it will not -- and should not -- lead to any diminution at
all in our attempts to do so. Which means, I suppose, that it's not
really dangerous at all.
_________________________________________________________________

RUPERT SHELDRAKE
Biologist, London; Author of The Presence of the Past
[sheldrake100.jpg]

A sense of direction involving new scientific principles

We don't understand animal navigation.

No one knows how pigeons home, or how swallows migrate, or how green
turtles find Ascension Island from thousands of miles away to lay
their eggs. These kinds of navigation involve more than following
familiar landmarks, or orientating in a particular compass direction;
they involve an ability to move towards a goal.

Why is this idea dangerous? Don't we just need a bit more time to
explain navigation in terms of standard physics, genes, nerve impulses
and brain chemistry? Perhaps.

But there is a dangerous possibility that animal navigation may not be
explicable in terms of present-day physics. Over and above the known
senses, some species of animals may have a sense of direction that
depends on their being attracted towards their goals through direct
field-like connections. These spatial attractors are places with which
the animals themselves are already familiar, or with which their
ancestors were familiar.

What are the facts? We know more about pigeons than any other species.
Everyone agrees that within familiar territory, especially within a
few miles of their home, pigeons can use landmarks; for example, they
can follow roads. But using familiar landmarks near home cannot
explain how racing pigeons return across unfamiliar terrain from six
hundred miles away, even flying over the sea, as English pigeons do
when they are raced from Spain.

Charles Darwin, himself a pigeon fancier, was one of the first to
suggest a scientific hypothesis for pigeon homing. He proposed that
they might use a kind of dead reckoning, registering all the twists
and turns of the outward journey. This idea was tested in the
twentieth century by taking pigeons away from their loft in closed
vans by devious routes. They still homed normally. So did birds
transported on rotating turntables, and so did birds that had been
completely anaesthetized during the outward journey.
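
Darwin's proposal is, in modern terms, path integration. A hedged
Python sketch of the idea (assuming, purely for illustration, that the
bird registers each leg of the outward trip as a heading and a
distance) shows what the closed-van and turntable experiments were
designed to disrupt:

    # Dead reckoning (path integration): sum the outward displacements,
    # then fly their negation. Headings are degrees counterclockwise from
    # east, distances in km -- conventions chosen only for this sketch.
    import math

    def home_vector(outward_legs):
        """Return (bearing home, distance home) from the logged legs."""
        dx = sum(d * math.cos(math.radians(h)) for h, d in outward_legs)
        dy = sum(d * math.sin(math.radians(h)) for h, d in outward_legs)
        return math.degrees(math.atan2(-dy, -dx)) % 360, math.hypot(dx, dy)

    # A devious outward route, logged as (heading, distance) legs.
    legs = [(90, 10), (45, 5), (180, 3)]
    bearing, distance = home_vector(legs)
    print(f"Home: bearing {bearing:.0f} degrees, {distance:.1f} km away")

If the outward legs cannot be registered -- because the van was closed,
the turntable spun, or the bird was anaesthetized -- the sum cannot be
formed, so pure dead reckoning predicts lost birds. The normal homing
described above is what rules the hypothesis out.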

What about celestial navigation? One problem for hypothetical solar or
stellar navigation systems is that many animals still navigate in
cloudy weather. Another problem is that celestial navigation depends
on a precise time sense. To test the sun navigation theory, homing
pigeons were clock-shifted by six or twelve hours and taken many miles
from their lofts before being released. On sunny days, they set off in
the wrong direction, as if a clock-dependent sun compass had been
shifted. But in spite of their initial confusion, the pigeons soon
corrected their courses and flew homewards normally.

Two main hypotheses remain: smell and magnetism. Smelling the home
position from hundreds of miles away is generally agreed to be
implausible. Even the most ardent defenders of the smell hypothesis
(the Italian school of Floriano Papi and his colleagues) concede that
smell navigation is unlikely to work at distances over 30 miles.

That leaves a magnetic sense. A range of animal species can detect
magnetic fields, including termites, bees and migrating birds. But
even if pigeons have a compass sense, this cannot by itself explain
homing. Imagine that you are taken to an unfamiliar place and given a
compass. You will know from the compass where north is, but not where
home is.

The obvious way of dealing with this problem is to postulate complex
interactions between known sensory modalities, with multiple back-up
systems. The complex interaction theory is safe, sounds sophisticated,
and is vague enough to be irrefutable. The idea of a sense of
direction involving new scientific principles is dangerous, but it may
be inevitable.
_________________________________________________________________

TOR NØRRETRANDERS
Science Writer; Consultant; Lecturer, Copenhagen; Author, The User
Illusion
[norretranders100.jpg]

Social Relativity

Relativity is my dangerous idea. Well, neither the special nor the
general theory of relativity, but what could be called social
relativity: The idea that the only thing that matters to human
well-being is how one stands relative to others. That is, only the
relative wealth of a person is important; the absolute level does not
really matter once everyone is above the level of having their
immediate survival needs fulfilled.

There is now strong and consistent evidence (from fields such as
microeconomics, experimental economics, psychology, sociology and
primatology) that it doesn't really matter how much you earn, as long
as you earn more than your wife's sister's husband. Pioneers in these
discussions are the late British social thinker Fred Hirsch and the
American economist Robert Frank.
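
A toy comparison makes the claim concrete. In the sketch below the
incomes and the simple rank-based measure of standing are invented for
illustration; the point is only that absolute income and relative
position can pull in opposite directions:

    # "Social relativity" in miniature: above subsistence, what is said
    # to matter is where your income ranks in your reference group.
    def relative_standing(own, others):
        """Fraction of the reference group that you out-earn."""
        return sum(own > o for o in others) / len(others)

    modest_but_first = relative_standing(50_000, [40_000, 42_000, 45_000])
    richer_but_last = relative_standing(60_000, [80_000, 85_000, 90_000])

    print(modest_but_first)  # 1.0 -- predicted to feel better off
    print(richer_but_last)   # 0.0 -- despite the larger absolute income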

Why is this idea dangerous? It seems to imply that equality will never
become possible in human societies: The driving force is always to get
ahead of the rest. Nobody will ever settle down and share.

So it would seem that we are forever stuck with poverty, disease and
unjust hierarchies. This idea could make the rich and the smart lean
back and forget about the rest of the pack.

But it shouldn't.

Inequality may subjectively seem nice to the rich, but objectively it
is not in their interest.

A huge body of epidemiological evidence points to the fact that
inequality is in fact the prime cause of human disease. Rich people
in poor countries are more healthy than poor people in rich countries,
even though the latter group has more resources in absolute terms.
Societies with strong gradients of wealth show higher death rates and
more disease, also amongst the people at the top. Pioneers in these
studies are the British epidemiologists Michael Marmot and Richard
Wilkinson.

Poverty means spreading of disease, degradation of ecosystems and
social violence and crime -- which are also bad for the rich.
Inequality means stress to everyone.

Social relativity then boils down to an illusion: It seems nice to me
to be better off than the rest, but in terms of vitals -- survival,
good health -- it is not.

Believing in social relativity can be dangerous to your health.
_________________________________________________________________

JOHN HORGAN
Science Writer; Author, Rational Mysticism
[horgan100.jpg]

We Have No Souls

The Depressing, Dangerous Hypothesis: We Have No Souls.
This year's Edge question makes me wonder: Which ideas pose a greater
potential danger? False ones or true ones? Illusions or the lack
thereof? As a believer in and lover of science, I certainly hope that
the truth will set us free, and save us, but sometimes I'm not so
sure.

The dangerous, probably true idea I'd like to dwell on in this Holiday
season is that we humans have no souls. The soul is that core of us
that supposedly transcends and even persists beyond our physicality,
lending us a fundamental autonomy, privacy and dignity. In his 1994
book The Astonishing Hypothesis: The Scientific Search for the Soul,
the late, great Francis Crick argued that the soul is an illusion
perpetuated, like Tinkerbell, only by our belief in it. Crick opened
his book with this manifesto: "'You,' your joys and your sorrows, your
memories and your ambitions, your sense of personal identity and free
will, are in fact no more than the behavior of a vast assembly of
nerve cells and their associated molecules." Note the quotation marks
around "You." The subtitle of Crick's book was almost comically
ironic, since he was clearly trying not to find the soul but to crush
it out of existence.

I once told Crick that "The Depressing Hypothesis" would have been a
more accurate title for his book, since he was, after all, just
reiterating the basic, materialist assumption of modern neurobiology
and, more broadly, all of science. Until recently, it was easy to
dismiss this assumption as moot, because brain researchers had made so
little progress in tracing cognition to specific neural processes.
Even self-proclaimed materialists -- who accept, intellectually, that
we are just meat machines -- could harbor a secret, sentimental belief
in a soul of the gaps. But recently the gaps have been closing, as
neuroscientists -- egged on by Crick in the last two decades of his
life -- have begun unraveling the so-called neural code, the software
that transforms electrochemical pulses in the brain into perceptions,
memories, decisions, emotions, and other constituents of
consciousness.

I've argued elsewhere that the neural code may turn out to be so
complex that it will never be fully deciphered. But 60 years ago, some
biologists feared the genetic code was too complex to crack. Then in
1953 Crick and Watson unraveled the structure of DNA, and researchers
quickly established that the double helix mediates an astonishingly
simple genetic code governing the heredity of all organisms. Science's
success in deciphering the genetic code, which has culminated in the
Human Genome Project, has been widely acclaimed -- and with good
reason, because knowledge of our genetic makeup could allow us to
reshape our innate nature. A solution to the neural code could give us
much greater, more direct control over ourselves than mere genetic
manipulation.

Will we be liberated or enslaved by this knowledge? Officials in the
Pentagon, the major funder of neural-code research, have openly
broached the prospect of cyborg warriors who can be remotely
controlled via brain implants, like the assassin in the recent remake
of "The Manchurian Candidate." On the other hand, a cult-like group of
self-described "wireheads" looks forward to the day when implants
allow us to create our own realities and achieve ecstasy on demand.

Either way, when our minds can be programmed like personal computers,
then, perhaps, we will finally abandon the belief that we have
immortal, inviolable souls, unless, of course, we program ourselves to
believe.
_________________________________________________________________

ERIC R. KANDEL
Biochemist and University Professor, Columbia University; Recipient,
The Nobel Prize, 2000; Author, Cellular Basis of Behavior
[kandel100.jpg]

Free will is exercised unconsciously, without awareness

It is clear that consciousness is central to understanding human
mental processes, and therefore is the holy grail of modern
neuroscience. What is less clear is that many of our mental processes
are unconscious and that these unconscious processes are as important
as conscious mental processes for understanding the mind. Indeed most
cognitive processes never reach consciousness.

As Sigmund Freud emphasized at the beginning of the 20th century, most
of our perceptual and cognitive processes are unconscious, except
those that are in the immediate focus of our attention. Based on these
insights Freud emphasized that unconscious mental processes guide much
of human behavior.

Freud's idea was a natural extension of the notion of unconscious
inference proposed in the 1860s by Hermann Helmholtz, the German
physicist turned neural scientist. Helmholtz was the first to measure
the conduction of electrical signals in nerves. He had expected it to
be as fast as the speed of light, like the conduction of electricity in
copper cables, and found to his surprise that it was much slower, only
about 90 m per second. He then examined the reaction time, the time it
takes a subject to respond to a consciously perceived stimulus, and found
that it was much, much slower than even the combined conduction times
required for sensory and motor activities.
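
To see why this gap pointed Helmholtz toward unconscious processing, a
rough back-of-the-envelope calculation helps. The nerve-path length and
reaction time below are illustrative assumptions (typical simple
reaction times are on the order of a couple of hundred milliseconds);
only the 90 m per second figure comes from the text above.

    # Conduction delay versus reaction time, in round numbers.
    conduction_speed = 90.0   # metres per second, the figure quoted above
    nerve_path = 1.0          # metres, an assumed sensory-plus-motor path
    reaction_time = 0.25      # seconds, an assumed typical reaction time

    conduction_delay = nerve_path / conduction_speed
    central_processing = reaction_time - conduction_delay
    print(f"Conduction accounts for about {conduction_delay * 1000:.0f} ms,")
    print(f"leaving roughly {central_processing * 1000:.0f} ms unexplained.")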

This caused Helmholtz to argue that a great deal of brain processing
occurred unconsciously prior to conscious perception of an object.
Helmholtz went on to argue that much of what goes on in the brain is
not represented in consciousness and that the perception of objects
depends upon "unconscious inferences" made by the brain, based on
thinking and reasoning without awareness. This view was not accepted
by many brain scientists who believed that consciousness is necessary
for making inferences. However, in the 1970s a number of experiments
began to accumulate in favor of the idea that most cognitive processes
that occur in the brain never enter consciousness.

Perhaps the most influential of these experiments were those carried
out by Benjamin Libet in 1986. Libet used as his starting point a
discovery made by the German neurologist Hans Kornhuber. Kornhuber
asked volunteers to move their right index finger. He then measured
this voluntary movement with a strain gauge while at the same time
recording the electrical activity of the brain by means of an
electrode on the skull. After hundreds of trials, Kornhuber found
that, invariably, each movement was preceded by a little blip in the
electrical record from the brain, a spark of free will! He called this
potential in the brain the "readiness potential" and found that it
occurred one second before the voluntary movement.

Libet followed up on Kornhuber's finding with an experiment in which
he asked volunteers to lift a finger whenever they felt the urge to do
so. He placed an electrode on a volunteer's skull and confirmed a
readiness potential about one second before the person lifted his or
her finger. He then compared the time it took for the person to will
the movement with the time of the readiness potential.

Amazingly, Libet found that the readiness potential appeared not
after, but 200 milliseconds before a person felt the urge to move his
or her finger! Thus by merely observing the electrical activity of the
brain, Libet could predict what a person would do before the person
was actually aware of having decided to do it.
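
The comparison at the core of such a trial can be sketched in a few
lines. The one-second and 200 millisecond figures are taken from the
text above; the simulated signal, the threshold and the variable names
are invented for illustration only.

    # Find when the slow readiness-potential ramp crosses a threshold and
    # compare that with the moment the subject reports feeling the urge.
    def readiness_onset(samples, threshold):
        """Return the time (ms) of the first sample above the threshold."""
        for t_ms, value in samples:
            if value > threshold:
                return t_ms
        return None

    # Simulated scalp recording: (ms before the movement at 0, microvolts).
    eeg = [(-1200, 0.5), (-1000, 2.1), (-800, 4.0), (-600, 6.2)]
    urge_ms = -800  # when the subject reports the urge was felt

    onset = readiness_onset(eeg, threshold=2.0)
    print(f"Readiness potential: {onset} ms; reported urge: {urge_ms} ms")
    print(f"Brain activity leads awareness by {urge_ms - onset} ms")  # 200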

These experiments led to the radical insight that by observing another
person's brain activity, one can predict what someone is going to do
before he is aware that he has made the decision to do it. This
finding has caused philosophers of mind to ask: If the choice is
determined in the brain unconsciously before we decide to act, where
is free will?

Are these choices predetermined? Is our experience of freely willing
our actions only an illusion, a rationalization after the fact for
what has happened? Freud, Helmholtz and Libet would disagree and argue
that the choice is freely made but that it happens without our
awareness. According to their view, the unconscious inference of
Helmholtz also applies to decision-making.

They would argue that the choice is made freely, but not consciously.
Libet for example proposes that the process of initiating a voluntary
action occurs in an unconscious part of the brain, but that just
before the action is initiated, consciousness is recruited to approve
or veto the action. In the 200 milliseconds before a finger is lifted,
consciousness determines whether it moves or not.

Whatever the reasons for the delay between decision and awareness,
Libet's findings now raise the moral question: Is one to be held
responsible for decisions that are made without conscious awareness?
_________________________________________________________________

DANIEL GOLEMAN
Psychologist; Author, Emotional Intelligence
[goleman100.jpg]

Cyber-disinhibition

The Internet inadvertently undermines the quality of human
interaction, allowing destructive emotional impulses freer rein under
specific circumstances. The reason is a neural fluke that results in
cyber-disinhibition of brain systems that keep our more unruly urges
in check. The tech problem: a major disconnect between the ways our
brains are wired to connect, and the interface offered in online
interactions.

Communication via the Internet can mislead the brain's social systems.
The key mechanisms are in the prefrontal cortex; these circuits
instantaneously monitor ourselves and the other person during a live
interaction, and automatically guide our responses so they are
appropriate and smooth. A key mechanism for this involves circuits
that ordinarily inhibit impulses for actions that would be rude or
simply inappropriate -- or outright dangerous.

In order for this regulatory mechanism to operate well, we depend on
real-time, ongoing feedback from the other person. The Internet has no
means to allow such realtime feedback (other than rarely used two-way
audio/video streams). That puts our inhibitory circuitry at a loss --
there is no signal to monitor from the other person. This results in
disinhibition: impulse unleashed.

Such disinhibition seems state-specific, and rarely occurs while
people are in positive or neutral emotional states. That's why the
Internet works admirably for the vast majority of communication. But
the disinhibition becomes far more likely when people feel strong,
negative emotions. What fails to be inhibited are the impulses
those emotions generate.

This phenomenon has been recognized since the earliest days of the
Internet (then the Arpanet, used by a small circle of scientists) as
"flaming," the tendency to send abrasive, angry or otherwise
emotionally "off" cyber-messages. The hallmark of a flame is that the
same person would never say the words in the email to the recipient
were they face-to-face. His inhibitory circuits would not allow it --
and so the interaction would go more smoothly. He might still
communicate the same core information face-to-face, but in a more
skillful manner. Offline and in life, people who flame repeatedly tend
to become friendless, or get fired (unless they already run the
company).

The greatest danger from cyber-disinhibition may be to young people.
The prefrontal inhibitory circuitry is among the last parts of the
brain to become fully mature, doing so sometime in the twenties.
During adolescence there is a developmental lag, with teenagers having
fragile inhibitory capacities, but fully ripe emotional impulsivity.

Strengthening these inhibitory circuits can be seen as the singular
task in neural development of the adolescent years.

One way this teenage neural gap manifests online is "cyber-bullying,"
which has emerged among girls in their early teens. Cliques of girls
post or send cruel, harassing messages to a target girl, who typically
is both reduced to tears and socially humiliated. The posts and
messages are anonymous, though they become widely known among the
target's peers. The anonymity and social distance of the Internet
allow an escalation of such petty cruelty to levels that are rarely
found in person: face-to-face, seeing someone cry typically halts
bullying among girls -- but that inhibitory signal cannot come via the
Internet.

A more ominous manifestation of cyber-disinhibition can be seen in the
susceptibility of teenagers to being induced to perform sexual acts in
front of webcams for an anonymous adult audience who pay to watch and
direct.
Apparently hundreds of teenagers have been lured into this corner of
child pornography, with an equally large audience of pedophiles. The
Internet gives strangers access to children in their own homes, who
are tempted to do things online they would never consider in person.

Cyber-bullying was reported last week in my local paper. The Webcam
teenage sex circuit was a front-page story in The New York Times two
days later.

As with any new technology, the Internet is an experiment in progress.
It's time we considered what other such downsides of
cyber-disinhibition may be emerging -- and looked for a technological
fix, if possible. The dangerous thought: the Internet may harbor
social perils our inhibitory circuitry was not designed to handle in
evolution.
_________________________________________________________________

BRIAN GREENE
Physicist & Mathematician, Columbia University; Author, The Fabric of
the Cosmos; Presenter, three-part Nova program, The Elegant Universe
[greene100.jpg]

The Multiverse

The notion that there are universes beyond our own -- the idea that we
are but one member of a vast collection of universes called the
multiverse -- is highly speculative, but both exciting and humbling.
It's also an idea that suggests a radically new, but inherently risky
approach to certain scientific problems.

An essential working assumption in the sciences is that with adequate
ingenuity, technical facility, and hard work, we can explain what we
observe. The impressive progress made over the past few hundred years
is testament to the apparent validity of this assumption. But if we
are part of a multiverse, then our universe may have properties that
are beyond traditional scientific explanation. Here's why:

Theoretical studies of the multiverse (within inflationary cosmology
and string theory, for example) suggest that the detailed properties
of the other universes may be significantly different from our own. In
some, the particles making up matter may have different masses or
electric charges; in others, the fundamental forces may differ in
strength and even number from those we experience; in others still,
the very structure of space and time may be unlike anything we've ever
seen.

In this context, the quest for fundamental explanations of particular
properties of our universe -- for example, the observed strengths of
the nuclear and electromagnetic forces -- takes on a very different
character. The strengths of these forces may vary from universe to
universe and thus it may simply be a matter of chance that, in our
universe, these forces have the particular strengths with which we're
familiar. More intriguingly, we can even imagine that in the other
universes where their strengths are different, conditions are not
hospitable to our form of life. (With different force strengths, the
processes giving rise to long-lived stars and stable planetary systems
-- on which life can form and evolve -- can easily be disrupted.) In
this setting, there would be no deep explanation for the observed
force strengths. Instead, we would find ourselves living in a universe
in which the forces have their familiar strengths simply because we
couldn't survive in any of the others where the strengths were
different.

If true, the idea of a multiverse would be a Copernican revolution
realized on a cosmic scale. It would be a rich and astounding
upheaval, but one with potentially hazardous consequences. Beyond the
inherent difficulty in assessing its validity, when should we allow
the multiverse framework to be invoked in lieu of a more traditional
scientific explanation? Had this idea surfaced a hundred years ago,
might researchers have chalked up various mysteries to how things just
happen to be in our corner of the multiverse, and not pressed on to
discover all the wondrous science of the last century?

Thankfully that's not how the history of science played itself out, at
least not in our universe. But the point is manifest. While some
mysteries may indeed reflect nothing more than the particular universe
within the multiverse that we find ourselves inhabiting, other
mysteries are worth struggling with because they are the result of
deep, underlying physical laws. The danger, if the multiverse idea
takes root, is that researchers may too quickly give up the search for
such underlying explanations. When faced with seemingly inexplicable
observations, researchers may invoke the framework of the multiverse
prematurely -- proclaiming some or other phenomenon to merely reflect
conditions in our bubble universe -- thereby failing to discover the
deeper understanding that awaits us.
_________________________________________________________________

DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds
Technologies; Author, Drawing Life
[gelernter100.jpg]

What are people well-informed about in the Information Age?

Let's date the Information Age to 1982, when the Internet went into
operation & the PC had just been born. What if people have been
growing less well-informed ever since? What if people have been
growing steadily more ignorant ever since the so-called Information
Age began?

Suppose the average US voter, college teacher, 5th-grade teacher, and
5th-grade student are each less well-informed today than they were in
'95, and were less well-informed then than in '85? Suppose, for that
matter, they were less well-informed in '85 than in '65?

If this is indeed the "information age," what exactly are people
well-informed about? Video games? Clearly history, literature,
philosophy, scholarship in general are not our specialities. This is
some sort of technology age -- are people better informed about
science? Not that I can tell. In previous technology ages, there was
interest across the population in the era's leading technology.

In the 1960s, for example, all sorts of people were interested in the
space program and rocket technology. Lots of people learned a little
about the basics -- what a "service module" or "trans-lunar injection"
was, why a Redstone-Mercury vehicle was different from an
Atlas-Mercury -- all sorts of grade-school students, lawyers,
housewives, English profs were up on these topics. Today there is no
comparable interest in computers & the internet, and no comparable
knowledge. "TCP/IP," "Routers," "Ethernet protocol," "cache hits" --
these are topics of no interest whatsoever outside the technical
community. The contrast is striking.
_________________________________________________________________

MAHZARIN R. BANAJI
Professor of Psychology, Harvard University
[banaji100.jpg]

We do not (and to a large extent, cannot) know who we are through
introspection

Conscious awareness is a sliver of the machine that is human
intelligence but it's the only aspect we experience and hence the only
aspect we come to believe exists. Thoughts, feelings, and behavior
operate largely without deliberation or conscious recognition -- it's
the routinized, automatic, classically conditioned, pre-compiled
aspects of our thoughts and feelings that make up a large part of who
we are. We don't know what motivates us even though we are certain we
know just why we do the things we do. We have no idea that our
perceptions and judgments are incorrect (as measured objectively) even
when they are. Even more stunning, our behavior is often discrepant
from our own conscious intentions and goals, not just objective
standards or somebody else's standards.

The same lack of introspective access that keeps us from seeing the
truth in a visual illusion is the lack of introspective access that
keeps us from seeing the truth of our own minds and behavior. The
"bounds" on our ethical sense rarely come to light because the input
into those decisions is kept firmly outside our awareness. Or at
least, they don't come to light until science brings them into the
light in a way that no longer permits them to remain in the dark.

It is a fact that human minds have a tendency to categorize and
learn in particular ways, and that feelings for one's ingroup and fear
of outgroups are part of our evolutionary history. Fearing things that
are different from oneself, holding what's not part of the dominant
culture (not American, not male, not White, not college-educated) to be
"less good" whether one wants to or not, reflects a part of our history
that made sense in a particular time and place -- because without it we
would not have survived. To know
this is to understand the barriers to change honestly and with
adequate preparation.

As everybody's favorite biologist Richard Dawkins said thirty years
ago:

Let us understand what our own selfish genes are up to, because we
may then at least have a chance to upset their designs, something
that no other species has ever aspired to do.

We cannot know ourselves without the methods of science. The mind
sciences have made it possible to look into the universe between the
ear drums in ways that were unimagined.

Emily Dickinson, in a letter asking a mentor to tell her how good a
poet she was, wrote: "The sailor cannot see the north, but knows the
needle can." We have the needle, and it involves direct,
concerted effort, using science to get to the next and perhaps last
frontier, of understanding not just our place among other planets, our
place among other species, but our very nature.
_________________________________________________________________

RODNEY BROOKS
Director, MIT Computer Science and Artificial Intelligence Laboratory
(CSAIL); Chief Technical Officer, iRobot Corporation; Author, Flesh
and Machines
[brooks100.jpg]

Being alone in the universe

The thing that I worry about most that may or may not be true is that
perhaps the spontaneous transformation from non-living matter to
living matter is extraordinarily unlikely. We know that it has
happened once. But what if we gain lots of evidence over the next few
decades that it happens very rarely?

In my lifetime we can expect to examine the surface of Mars, and the
moons of the gas giants in some detail. We can also expect to be able
to image extra-solar planets within a few tens of light years to
resolutions where we would be able to detect evidence of large scale
biological activity.

What if none of these indicate any life whatsoever? What does that do
to our scientific belief that life did arise spontaneously? It should
not change it, but it will make it harder to defend against
non-scientific attacks. And wouldn't it sadden us immensely if we were
to discover that there is a vanishingly small probability that life
will arise even once in any given galaxy?

Being alone in this solar system will not be such a shock, but being
alone in the galaxy, or worse alone in the universe would, I think,
drive us to despair, and back towards religion as our salve.
_________________________________________________________________

LEE SMOLIN
Physicist, Perimeter Institute; Author, Three Roads to Quantum Gravity
[smolin100.jpg]

Seeing Darwin in the light of Einstein; seeing Einstein in the light
of Darwin

The revolutionary moves made by Einstein and Darwin are closely
related, and their combination will increasingly come to define how we
see our worlds: physical, biological and social.

Before Einstein, the properties of elementary particles were
understood as being defined against an absolute, eternally fixed
background. This way of doing science had been introduced by Newton.
His method was to posit the existence of an absolute and eternal
background structure against which the properties of things were
defined. For example, this is how Newton conceived of space and time.
Particles have properties defined, not with respect to each other, but
each with respect to only the absolute background of space and time.
Einstein's great achievement was to realize successfully the contrary
idea, called relationalism, according to which the world is a network
of relationships which evolve in time. There is no absolute background
and the properties of anything are only defined in terms of its
participation in this network of relations.

Before Darwin, species were thought of as eternal categories, defined
a priori; after Darwin, species were understood to be relational
categories -- that is, only defined in terms of their relationship with
the network of interactions making up the biosphere. Darwin's great
contribution was to understand that there is a process -- natural
selection -- that can act on relational properties, leading to the birth
of genuine novelty by creating complexes of relationships that are
increasingly structured and complex.

Seeing Darwin in the light of Einstein, we understand that all the
properties a species has in modern biology are relational. There is no
absolute background in biology.

Seeing Einstein in the light of Darwin opens up the possibility that
the mechanism of natural selection could act not only on living things
but on the properties that define the different species of elementary
particles.

At first, physicists thought that the only relational properties an
elementary particle might have were its position and motion in space
and time. The other properties, like mass and charge were thought of
in the old framework: defined by a background of absolute law. The
standard model of particle physics taught us that some of those
properties, like mass, are only the consequence of a particle's
interactions with other fields. As a result the mass of a particle is
determined environmentally, by the phase of the other fields it
interacts with.

I don't know which model of quantum gravity is right, but all the
leading candidates, string theory, loop quantum gravity and others,
teach us that it is possible that all properties of elementary
particles are relational and environmental. In different possible
universes there may be different combinations of elementary particles
and forces. Indeed, all that used to be thought of as fundamental,
space and the elementary particles themselves, are increasingly seen,
in models of quantum gravity, as themselves emergent from a more
elementary network of relations.

The basic method of science after Einstein seems to be: identify
something in your theory that is playing the role of an absolute
background, that is needed to define the laws that govern objects in
your theory, and understand it more deeply as a contingent property,
which itself evolves subject to law.

For example, before Einstein the geometry of space was thought of as
specified absolutely as part of the laws of nature. After Einstein we
understand geometry is contingent and dynamical, which means it
evolves subject to law. This means that Einstein's move can even be
applied to aspects of what were thought to be the laws of nature: so
that even aspects of the laws turn out to evolve in time.

The basic method of science after Darwin seems to be to identify some
property once thought to be absolute and defined a priori, and to
recognize that it can be understood because it has evolved by a
process of, or akin to, natural selection. This has revolutionized
biology and is in
the process of doing the same to the social sciences.

We can see by how I have stated it that these two methods are closely
related. Einstein emphasizes the relational aspect of all properties
described by science, while Darwin proposes that ultimately the law
which governs the evolution of everything else -- including perhaps
what were once seen to be laws -- is natural selection.

Should Darwin's method be applied even to the laws of physics?
Recent developments in elementary particle physics give us little
alternative if we are to have a rational understanding of the laws
that govern our universe. I am referring here to the realization that
string theory gives us, not a unique set of particles and forces, but
an infinite list out of which one came to be selected for our
universe. We physicists now have to understand Darwin's lesson: the
only way to understand how one out of a vast number of choices was
made, one which favors improbable structure, is that it is the result
of evolution by natural selection.

Can this work? I showed it might, in 1992, in a theory of cosmological
natural selection. This remains the only theory so far proposed of how
our laws came to be selected that makes falsifiable predictions.

The idea that laws of nature are themselves the result of evolution by
natural selection is nothing new; it was anticipated by the
philosopher Charles Sanders Peirce, who wrote in 1891:

To suppose universal laws of nature capable of being apprehended by
the mind and yet having no reason for their special forms, but
standing inexplicable and irrational, is hardly a justifiable
position. Uniformities are precisely the sort of facts that need to
be accounted for. Law is par excellence the thing that wants a
reason. Now the only possible way of accounting for the laws of
nature, and for uniformity in general, is to suppose them results
of evolution.

This idea remains dangerous, not only for what it has achieved, but
for what it implies for the future. For there are implications that
have yet to be absorbed or understood, even by those who have come to
believe it is the only way forward for science. For example, must
there always be a deeper, or meta-law, which governs the physical
mechanisms by which a law evolves? And what about the fact that laws
of physics are expressed in mathematics, which is usually thought of
as encoding eternal truths? Can mathematics itself come to be seen as
time-bound rather than as transcendent and eternal platonic truths?

I believe that we will achieve clarity on these and other scary
implications of the idea that all the regularities we observe,
including those we have gotten used to calling laws, are the result of
evolution by natural selection. And I believe that once this is
achieved Einstein and Darwin will be understood as partners in the
greatest revolution yet in science, a revolution that taught us that
the world we are imbedded in is nothing but an ever evolving network
of relationships.
_________________________________________________________________

ALISON GOPNIK
Psychologist, UC-Berkeley; Coauthor, The Scientist In the Crib
[gopnik100.jpg]

A cacophony of "controversy"

It may not be good to encourage scientists to articulate dangerous
ideas.

Good scientists, almost by definition, tend towards the contrarian and
ornery, and nothing gives them more pleasure than holding to an
unconventional idea in the face of opposition. Indeed, orneriness and
contrarianism are something of a currency in science -- nobody wants to
have an idea that everyone else has too. Scientists are always
constructing a straw man "establishment" opponent who they can then
fearlessly demolish. If you combine that with defying the conventional
wisdom of non-scientists you have a recipe for a very distinctive kind
of scientific smugness and self-righteousness. We scientists see this
contrarian habit grinning back at us in a particularly hideous and
distorted form when global warming opponents or intelligent design
advocates invoke the unpopularity of their ideas as evidence that they
should be accepted, or at least discussed.

The problem is exacerbated for public intellectuals. For the media,
too, would far rather hear about contrarian or unpopular or morally
dubious or "controversial" ideas than ones that are congruent with
everyday morality and wisdom. No one writes a newspaper article about
a study that shows that girls are just as good at some task as boys,
or that children are influenced by their parents.

It is certainly true that there is no reason that scientifically valid
results should have morally comforting consequences -- but there is no
reason why they shouldn't either. Unpopularity or shock is no more a
sign of truth than popularity is. More to the point, when scientists
do have ideas that are potentially morally dangerous they should
approach those ideas with hesitancy and humility. And they should do
so in full recognition of the great human tragedy that, as Isaiah
Berlin pointed out, there can be genuinely conflicting goods and that
humans are often in situations of conflict for which there is no
simple or obvious answer.

Truth and morality may indeed in some cases be competing values, but
that is a tragedy, not a cause for self-congratulation. Humility and
empathy come less easily to most scientists, most certainly including
me, than pride and self-confidence, but perhaps for that very reason
they are the virtues we should pursue.

This is, of course, itself a dangerous idea. Orneriness and
contrarianism are, in fact, genuine scientific virtues, too. And in the
current profoundly anti-scientific political climate it is terribly
dangerous to do anything that might give comfort to the enemies of
science. But I think the peril to science actually doesn't lie in
timidity or self-censorship. It is much more likely to lie in a
cacophony of "controversy".
_________________________________________________________________

KEVIN KELLY
Editor-At-Large, Wired; Author, New Rules for the New Economy
[kelly100.jpg]
More anonymity is good

More anonymity is good: that's a dangerous idea.

Fancy algorithms and cool technology make true anonymity in mediated
environments more possible today than ever before. At the same time
this techno-combo makes true anonymity in physical life much harder.
For every step that masks us, we move two steps toward totally
transparent unmasking. We have caller ID, but also caller ID Block,
and then caller ID-only filters. Coming up: biometric monitoring and
little place to hide. A world where everything about a person can be
found and archived is a world with no privacy, and therefore many
technologists are eager to maintain the option of easy anonymity as a
refuge for the private.

However, in every system that I have seen where anonymity becomes
common, the system fails. The recent taint in the honor of Wikipedia
stems from the extreme ease with which anonymous declarations can be
put into a very visible public record. Communities infected with
anonymity will either collapse, or shift from the anonymous to the
pseudo-anonymous, as in eBay, where you have a traceable identity
behind an invented nickname. Or voting, where you can authenticate an
identity without tagging it to a vote.

Anonymity is like a rare earth metal. These elements are a necessary
ingredient in keeping a cell alive, but the amount needed is a mere
hard-to-measure trace. In larger doses these metals are some of the
most toxic substances known to life. They kill. Anonymity is the
same. As a trace element in vanishingly small doses, it's good for the
system by enabling the occasional whistleblower or persecuted fringe.
But if anonymity is present in any significant quantity, it will
poison the system.

There's a dangerous idea circulating that the option of anonymity
should always be at hand, and that it is a noble antidote to
technologies of control. This is like pumping up the levels of heavy
metals in your body to make it stronger.

Privacy can only be won by trust, and trust requires persistent
identity, if only pseudo-anonymously. In the end, the more trust, the
better. Like all toxins, anonymity should be kept as close to zero as
possible.
_________________________________________________________________

DENIS DUTTON
Professor of the philosophy of art, University of Canterbury, New
Zealand, editor of Philosophy and Literature and Arts & Letters Daily
[dutton100.jpg]

A "grand narrative"

The humanities have gone through the rise of Theory in the 1960s, its
firm hold on English and literature departments through the 1970s and
80s, followed most recently by its much-touted decline and death.

Of course, Theory  (capitalization is an English department
affectation) never operated as a proper research program in any
scientific sense -- with hypotheses validated (or falsified) by
experiment or accrued evidence. Theory was a series of intellectual
fashion statements, clever slogans and postures, imported from France
in the 60s, then developed out of Yale and other Theory hot spots.
The academic work Theory spawned was noted more for  its chosen
jargons, which functioned like secret codes, than for any concern to
establish truth or advance knowledge. It was all about careers and
prestige.

Truth and knowledge, in fact, were ruled out as quaint illusions.
This cleared the way, naturally, for an "anything-goes" atmosphere of
academic criticism. In reality, it was anything but anything goes,
since the political demands of the period included a long list of
stereotyped villains (the West, the Enlightenment, dead whites males,
even clear writing) to be pitted against mandatory heroines and heroes
(indigenous peoples, the working class, the oppressed, and so forth).

Though the politics remains as strong as ever in academe, Theory has
atrophied not because it was refuted, but because everyone got bored
with it.  Add to that the absurdly bad writing of academic humanists
of the period and episodes like the Sokal Hoax, and the decline was
inevitable.  Theory academics could with high seriousness ignore
rational counter-arguments, but for them ridicule and laughter were
like water thrown at the Wicked Witch.  Theory withered and died.

But wait. Here is exactly where my most dangerous idea comes in. What
if it turned out that the academic humanities -- art criticism, music
and literary history, aesthetic theory, and the philosophy of art --
actually had available to them a true, and therefore permanently
valuable, theory to organize their speculations and interpretations?
What if there really existed a hitherto unrecognized "grand narrative"
that could explain the entire history of creation and experience of
the arts worldwide?

Aesthetic experience, as well as the context of artistic creation, is
a phenomenon both social and psychological. From the standpoint of
inner experience, it can be addressed by evolutionary psychology: the
idea that our thinking and values are conditioned by the 2.6 million
years of natural and sexual selection in the Pleistocene.

This Darwinian theory has much to say about the abiding,
cross-culturally ascertainable values human beings find in art. The
fascination, for example, that people worldwide find in the exercise
of artistic virtuosity, from Praxiteles to Hokusai to Renee Fleming,
is not a social construct, but a Pleistocene adaptation (which outside
of the arts shows itself in sporting interests everywhere).  That
calendar landscapes worldwide feature alternating copses of trees and
open spaces, often hilly land, water, and paths or river banks that
wind into an inviting distance is a Pleistocene landscape preference
(which shows up in both art history and in the design of public parks
everywhere).  That soap operas and Greek tragedy alike present themes of
family breakdown ("She killed him because she loved him") is a
reflection of ancient, innate content interests in story-telling.

Darwinian theory offers substantial answers to perennial aesthetic
questions. It has much to say about the origins of art. It's unlikely
that the arts came about at one time or for one purpose; they evolved
from overlapping interests based in survival and mate selection in the
80,000 generations of the Pleistocene. How we scan visually, how we
hear, our sense of rhythm, the pleasures of artistic expression and in
joining with others as an audience, and, not least, how the arts
excite us using a repertoire of universal human emotions: all of this
and more will be illuminated and explained by a Darwinian aesthetics.

I've encountered stiff academic resistance to the notion that
Darwinian theory might greatly improve the understanding of our
aesthetic and imaginative lives.  There's no reason to worry.  The
most complete, evolutionarily-based explanation of a great work of
art, classic or recent, will address its form, its narrative content,
its ideology, how it is taken in by the eye or mind, and indeed, how
it can produce a deep, even life-transforming pleasure.  But nothing
in a valid aesthetic psychology will rob art of its appeal, any more
than knowing how we evolved to enjoy fat and sweet makes a piece of
cheesecake any less delicious. Nor will a Darwinian aesthetics reduce
the complexity of art to simple formulae.  It will only give us a
better understanding of the greatest human achievements and their
effects on us.

In the sense that it would show innumerable careers in the humanities
over the last forty years to have been wasted on banal politics and
execrable criticism, Darwinian aesthetics is a very dangerous idea
indeed.  For people who really care about understanding art, it would
be a combination of fresh air and strong coffee.
_________________________________________________________________

SIMON BARON-COHEN
Psychologist, Autism Research Centre, Cambridge University; Author,
The Essential Difference
[baroncohen100.jpg]

A political system based on empathy

Imagine a political system based not on legal rules (systemizing) but
on empathy. Would this make the world a safer place?

The UK Parliament, US Congress, Israeli Knesset, French National
Assembly, Italian Senato della Repubblica, Spanish Congreso de los
Diputados -- what do such political chambers have in common? Existing
political systems are based on two principles: getting power through
combat, and then creating/revising laws and rules through combat.

Combat is sometimes physical (toppling your opponent militarily),
sometimes economic (establishing a trade embargo, to starve your
opponent of resources), sometimes propaganda-based (waging a media
campaign to discredit your opponent's reputation), and sometimes
through voting-related activity (lobbying, forming alliances, fighting
to win votes in key seats), with the aim to 'defeat' the opposition.

Creating/revising laws and rules is what you do once you are in power.
These might be constitutional rules, rules of precedence, judicial
rulings, statutes, or other laws or codes of practice. Politicians
battle for their rule-based proposal (which they hold to be best) to
win, and battle to defeat the opposition's rival proposal.

This way of doing politics is based on "systemizing". First you
analyse the most effective form of combat (itself a system) to win. If
we do x, then we will obtain outcome y. Then you adjust the legal code
(another system). If we pass law A, we will obtain outcome B.

My colleagues and I have studied the essential difference between how
men and women think. Our studies suggest that (on average) more men
are systemizers, and more women are empathizers. Since most political
systems were set up by men, it may be no coincidence that we have
ended up with political chambers that are built on the principles of
systemizing.

So here's the dangerous new idea. What would it be like if our
political chambers were based on the principles of empathizing? It is
dangerous because it would mean a revolution in how we choose our
politicians, how our political chambers govern, and how our
politicians think and behave. We have never given such an alternative
political process a chance. Might it be better and safer than what we
currently have? Since empathy is about keeping in mind the thoughts
and feelings of other people (not just your own), and being sensitive
to another person's thoughts and feelings (not just riding rough-shod
over them), it is clearly incompatible with notions of "doing battle
with the opposition" and "defeating the opposition" in order to win
and hold on to power.

Currently, we select a party (and ultimately a national) leader based
on their "leadership" qualities. Can he or she make decisions
decisively? Can they do what is in the best interests of the party, or
the country, even if it means sacrificing others to follow through on
a decision? Can they ruthlessly reshuffle their Cabinet and "cut
people loose" if they are no longer serving their interests? These are
the qualities of a strong systemizer.

Note we are not talking about whether that politician is male or
female. We are talking about how a politician (irrespective of their
sex) thinks and behaves.

We have had endless examples of systemizing politicians unable to
resolve conflict. Empathizing politicians would perhaps follow the
example of Mandela and De Klerk, who sat down to try to understand the other,
to empathize with the other, even if the other was defined as a
terrorist. To do this involves the empathic act of stepping into the
other's shoes, and identifying with their feelings.

The details of a political system based on empathizing would need a
lot of working out, but we can imagine certain qualities that would
have no place.

Gone would be politicians who are skilled orators but who simply
deliver monologues, standing on a platform, pointing forcefully into
the air to underline their insistence -- their very body language
carrying an implied threat of poking the listener in the chest or the
face -- to win over an audience. Gone too would be politicians who
are so principled that they are rigid and uncompromising.

Instead, we would elect politicians based on different qualities:
politicians who are good listeners, who ask questions of others
instead of assuming they know the right course of action; politicians
who respond sensitively to another, different point of view, and who
can be flexible about where the dialogue might lead. Instead of
seeking to control and dominate, our politicians would be seeking to
support, enable, and care.
_________________________________________________________________

FREEMAN DYSON
Physicist, Institute for Advanced Study; Author, Disturbing the
Universe
[dysonf100.jpg]
Biotechnology will be thoroughly domesticated in the next fifty years

Biotechnology will be domesticated in the next fifty years as
thoroughly as computer technology was in the last fifty years.

This means cheap and user-friendly tools and do-it-yourself kits, for
gardeners to design their own roses and orchids, and for
animal-breeders to design their own lizards and snakes. A new art-form
as creative as painting or cinema. It means biotech games for children
down to kindergarten age, like computer-games but played with real
eggs and seeds instead of with images on a screen. Kids will grow up
with an intimate feeling for the organisms that they create. It means
an explosion of biodiversity as new ecologies are designed to fit into
millions of local niches all over the world. Urban and rural
landscapes will become more varied and more fertile.

There are two severe and obvious dangers. First, smart kids and
malicious grown-ups will find ways to convert biotech tools to the
manufacture of lethal microbes. Second, ambitious parents will find
ways to apply biotech tools to the genetic modification of their own
babies. The great unanswered question is whether we can regulate
domesticated biotechnology so that it can be applied freely to animals
and vegetables but not to microbes and humans.
_________________________________________________________________

GREGORY COCHRAN
Consultant in adaptive optics and an adjunct professor of anthropology
at the University of Utah
[cochran100.jpg]

There is something new under the sun -- us

Thucydides said that human nature was unchanging and thus predictable
-- but he was probably wrong.  If you consider natural selection
operating in fast-changing human environments, such stasis is most
unlikely. We know of a number of cases in which there has been rapid
adaptive change in humans; for example, most of the malaria-defense
mutations such as sickle cell are recent, just a few thousand years
old.  The lactase mutation that lets most adult Europeans digest ice
cream is not much older.

There is no magic principle that restricts human evolutionary change
to disease defenses and dietary adaptations: everything is up for
grabs.  Genes affecting personality, reproductive strategies,
cognition, are all able to change significantly over few-millennia
time scales if the environment favors such change -- and this includes
the new environments we have made for ourselves, things like new ways
of making a living and new social structures.  I would be astonished
if the mix of personality types favored among hunter-gatherers is
"exactly" the same as that favored among peasant farmers ruled by a
Pharaoh.  In fact they might be fairly different.

There is evidence that such change has occurred. Henry Harpending and
I have, we think, made a strong case that natural selection changed
the Ashkenazi Jews over a thousand years or so, favoring certain kinds
of cognitive abilities and generating genetic diseases as a side
effect.  Bruce Lahn's team has found new variants of brain-development
genes: one, ASPM, appears to have risen to high frequency in Europe
and the Middle East in about six thousand years.  We don't yet know
what this new variant does, but it certainly could affect the human
psyche -- and if it does, Thucydides was wrong.  We may not be doomed
to repeat the Sicilian expedition: on the other hand, since we don't
understand much yet about the changes that have occurred, we might be
even more doomed.  But at any rate, we have almost certainly changed.
There is something new under the sun -- us.
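
[A minimal sketch of the arithmetic behind this claim, with purely
illustrative selection coefficients and generation times rather than
figures from the essay: in a simple deterministic single-locus model,
the odds p/(1-p) of a favored allele grow by roughly a factor of (1+s)
per generation, so even a modest advantage compounds dramatically over
a few thousand years.]

    # Illustrative only: deterministic spread of a favored allele.
    # With selection coefficient s, the odds p/(1-p) grow by roughly
    # (1+s) each generation.
    def allele_frequency(p0, s, generations):
        """Allele frequency after the given number of generations."""
        odds = p0 / (1.0 - p0)
        odds *= (1.0 + s) ** generations
        return odds / (1.0 + odds)

    # Hypothetical numbers: ~240 generations is about 6,000 years at 25
    # years per generation. A 1% advantage lifts a rare variant from
    # 0.1% to roughly 1%; a 3% advantage carries it past 50%.
    for s in (0.01, 0.03):
        print(f"s={s:.2f}: frequency after 240 generations = "
              f"{allele_frequency(0.001, s, 240):.3f}")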

This concept opens strange doors.  If true, it means that the people
of Sumeria and Egypt's Old Kingdom were probably fundamentally
different from us: human nature has changed -- some, anyhow -- over
recorded history. Julian Jaynes, in The Origin of Consciousness in the
Breakdown of the Bicameral Mind, argued that there was something
qualitatively different about the human mind in ancient civilization.
On first reading, Breakdown seemed one of the craziest books ever
written, but Jaynes may have been on to something.

If people a few thousand years ago thought and acted differently
because of biological differences, history is never going to be the
same.
_________________________________________________________________

GEORGE B. DYSON
Science Historian; Author, Project Orion
[dysong100.jpg]

Understanding molecular biology without discovering the origins of
life

I predict we will reach a complete understanding of molecular biology
and molecular evolution, without ever discovering the origins of life.

This idea is dangerous, because it suggests a mystery that science
cannot explain. Or, it may be interpreted as confirmation that life is
merely the collective result of a long series of incremental steps,
and that it is impossible to draw a precise distinction between life
and non-life.

"The only thing of which I am sure," argued Samuel Butler in 1880, "is
that the distinction between the organic and inorganic is arbitrary;
that it is more coherent with our other ideas, and therefore more
acceptable, to start with every molecule as a living thing, and then
deduce death as the breaking up of an association or corporation, than
to start with inanimate molecules and smuggle life into them."

Every molecule a living thing? That's not even dangerous, it's wrong!
But where else can you draw the line?
_________________________________________________________________

KEITH DEVLIN
Mathematician; Executive Director, Center for the Study of Language
and Information, Stanford; Author, The Millennium Problems
[devlin100.jpg]

We are entirely alone

Living creatures capable of reflecting on their own existence are a
one-off, freak accident, existing for one brief moment in the history
of the universe. There may be life elsewhere in the universe, but it
does not have self-reflective consciousness. There is no God; no
Intelligent Designer; no higher purpose to our lives.

Personally, I have never found this possibility particularly
troubling, but my experience has been that most people go to
considerable lengths to convince themselves that it is otherwise.

I think that many people find the suggestion dangerous because they
see it as leading to a life devoid of meaning or moral values. They
see it as a suggestion full of despair, an idea that makes our lives
seem pointless. I believe that the opposite is the case. As the
product of that unique, freak accident, finding ourselves able to
reflect on and enjoy our conscious existence, the very unlikeliness
and uniqueness of our situation surely makes us highly appreciative of
what we have.

Life is not just important to us; it is literally everything we have.
That makes it, in human terms, the most precious thing there is. That
not only gives life meaning for us, something to be respected and
revered, but a strong moral code follows automatically.

The fact that our existence has no purpose outside that existence is
completely irrelevant to the way we live our lives, since we are
inside our existence. The fact that our existence has no purpose for
the universe -- whatever that means -- in no way means it has no
purpose for us. We must ask and answer questions about ourselves
within the framework of our existence as what we are.
_________________________________________________________________

FRANK TIPLER
Professor of Mathematical Physics, Tulane University; Author, The
Physics of Immortality
[tipler100.jpg]

Why I Hope the Standard Model is Wrong about Why There is More Matter
Than Antimatter

The Standard Model of particle physics -- a theory of all forces and
particles except gravity and a theory that has survived all tests over
the past thirty years -- says it is possible to convert matter
entirely into energy. Old-fashioned nuclear physics allows some matter
to be converted into energy, but because nuclear physics requires the
number of heavy particles like neutrons and protons, and light
particles like electrons, to be separately conserved in nuclear
reactions, only a small fraction (less than 1%) of the mass of the
uranium or plutonium in an atomic bomb can be converted into energy.
The Standard Model says that there is a way to convert all the mass of
ordinary matter into energy; for example, it is in principle possible
to convert the proton and electron making up a hydrogen atom entirely
into energy. Particle physicists have long known about this
possibility, but have considered it forever irrelevant to human
technology because the energy required to convert matter into pure
energy via this process is at the very limit of our most powerful
accelerators (a trillion electron volts, or one TeV).

I am very much afraid that the particle physicists are wrong about
this Standard Model pure energy conversion process being forever
irrelevant to human affairs. I have recently come to believe that the
consistency of quantum field theory requires that it should be
possible to convert up to 100 kilograms of ordinary matter into pure
energy via this process using a device that could fit inside the trunk
of a car, a device that could be manufactured in a small factory. Such
a device would solve all our energy problems -- we would not need
fossil fuels -- but 100 kilograms of matter converted to energy is the
energy released by a 1,000-megaton nuclear bomb. If such a bomb can be
manufactured in a small factory, then terrorists everywhere will
eventually have such weapons. I fear for the human race if this comes
to pass. I very much hope I am wrong about the technological
feasibility of such a bomb.
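
[A rough back-of-the-envelope check of the scale involved, using
nothing more than E = mc^2; the arithmetic below is illustrative and
not the author's:

    E = mc^2 = (100\,\mathrm{kg}) \times (3\times 10^{8}\,\mathrm{m/s})^2
      \approx 9\times 10^{18}\,\mathrm{J}

One megaton of TNT is about 4.2 x 10^15 J, so this is of order
2 x 10^3 megatons -- gigaton-class, the same order of magnitude as the
thousand-megaton comparison above.]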
_________________________________________________________________

SCOTT SAMPSON
Chief Curator, Utah Museum of Natural History; Associate Professor
Department of Geology and Geophysics, University of Utah; Host,
Dinosaur Planet TV series
[sampson100.jpg]
The purpose of life is to disperse energy

The truly dangerous ideas in science tend to be those that threaten
the collective ego of humanity and knock us further off our pedestal
of centrality. The Copernican Revolution abruptly dislodged humans
from the center of the universe. The Darwinian Revolution yanked Homo
sapiens from the pinnacle of life. Today another menacing revolution
sits at the horizon of knowledge, patiently awaiting broad realization
by the same egotistical species.

The dangerous idea is this: the purpose of life is to disperse energy.

Many of us are at least somewhat familiar with the second law of
thermodynamics, the unwavering propensity of energy to disperse and,
in doing so, transition from high quality to low quality forms. More
generally, as stated by ecologist Eric Schneider, "nature abhors a
gradient," where a gradient is simply a difference over a distance --
for example, in temperature or pressure. Open physical systems --
including those of the atmosphere, hydrosphere, and geosphere -- all
embody this law, being driven by the dispersal of energy, particularly
the flow of heat, continually attempting to achieve equilibrium.
Phenomena as diverse as lithospheric plate motions, the northward flow
of the Gulf Stream, and the occurrence of deadly hurricanes are all
manifestations of the second law.
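
[A minimal illustration of "nature abhors a gradient": when a quantity
of heat Q flows from a hot reservoir at temperature T_h to a cold one
at T_c, the total entropy change is

    \Delta S = \frac{Q}{T_c} - \frac{Q}{T_h}
             = Q\left(\frac{1}{T_c} - \frac{1}{T_h}\right) \ge 0
    \qquad (T_h \ge T_c),

so the flow that erodes the gradient is exactly the direction the
second law favors.]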

There is growing evidence that life, the biosphere, is no different.
It has often been said that life's complexity contravenes the second
law, indicating the work either of a deity or of some unknown natural
process, depending on one's bias.
dynamics of ecosystems obey the second law mandate, functioning in
large part to dissipate energy. They do so not by burning brightly and
disappearing, like a fire torching a forest, but through stable
metabolic cycles that store chemical energy and continually reduce the
solar gradient. Photosynthetic plants, bacteria, and algae capture
energy from the sun and form the core of all food webs.

Virtually all organisms, including humans, are, in a real sense,
sunlight transmogrified, temporary waypoints in the flow of energy.
Ecological succession, viewed from a thermodynamic perspective, is a
process that maximizes the capture and degradation of energy.
Similarly, the tendency for life to become more complex over the past
3.5 billion years (as well as the overall increase in biomass and
organismal diversity through time) is not due simply to natural
selection, as most evolutionists still argue, but also to nature's
"efforts" to grab more and more of the sun's flow. The slow burn that
characterizes life enables ecological systems to persist over deep
time, changing in response to external and internal perturbations.

Ecology has been summarized by the pithy statement, "energy flows,
matter cycles." Yet this maxim applies equally to complex systems in
the non-living world; indeed it literally unites the biosphere with
the physical realm. More and more, it appears that complex, cycling,
swirling systems of matter have a natural tendency to emerge in the
face of energy gradients. This recurrent phenomenon may even have been
the driving force behind life's origins.

This idea is not new, and is certainly not mine. Nobel laureate Erwin
Schrödinger was one of the first to articulate the hypothesis, as part
of his famous "What is Life" lectures in Dublin in 1943. More
recently, Jeffrey Wicken, Harold Morowitz, Eric Schneider and others
have taken this concept considerably further, buoyed by results from a
range of studies, particularly within ecology. Schneider and Dorian
Sagan provide an excellent summary of this hypothesis in their recent
book, "Into the Cool".

The concept of life as energy flow, once fully digested, is profound.
Just as Darwin fundamentally connected humans to the non-human world,
a thermodynamic perspective connects life inextricably to the
non-living world. This dangerous idea, once broadly distributed and
understood, is likely to provoke reaction from many sectors, including
religion and science. The wondrous diversity and complexity of life
through time, far from being the product of intelligent design, is a
natural phenomenon intimately linked to the physical realm of energy
flow.

Moreover, evolution is not driven by the machinations of selfish genes
propagating themselves through countless millennia. Rather, ecology
and evolution together operate as a highly successful, extremely
persistent means of reducing the gradient generated by our nearest
star. In my view, evolutionary theory (the process, not the fact of
evolution!) and biology generally are headed for a major overhaul once
investigators fully comprehend the notion that the complex systems of
earth, air, water, and life are not only interconnected, but
interdependent, cycling matter in order to maintain the flow of
energy.

Although this statement addresses only naturalistic function and is
silent with regard to spiritual meaning, it is likely to have deep
effects outside of science. In particular, a broad understanding of
life's role in dispersing energy has great potential to help humans
reconnect both to nature and to the planet's physical systems at a key
moment in our species' history.
_________________________________________________________________

JEREMY BERNSTEIN
Professor of Physics, Stevens Institute of Technology; Author,
Hitler's Uranium Club

The idea that we understand plutonium

The most dangerous idea I have come across recently is the idea that
we understand plutonium. Plutonium is the most complex element in the
periodic table. It has six different crystal phases between room
temperature and its melting point. It can catch fire spontaneously in
the presence of water vapor and if you inhale minuscule amounts you
will die of lung cancer. It is the principle element in the "pits"
that are the explosive cores of nuclear weapons. In these pits it is
alloyed with gallium. No one knows why this works and no one can be
sure how stable this alloy is. These pits, in the thousands, are now
decades old. What is dangerous is the idea that they have retained
their integrity and can be safely stored into the indefinite future.
_________________________________________________________________

MIHALY CSIKSZENTMIHALYI
Psychologist; Director, Quality of Life Research Center, Claremont
Graduate University; Author, Flow
[csik100.jpg]
The free market

Generally, ideas are thought to be dangerous when they threaten an
entrenched authority. Galileo was prosecuted not because he claimed
that the earth revolved around the sun -- a "hypothesis" his chief
prosecutor, Cardinal Bellarmine, apparently was quite willing to
entertain in private -- but because the Church could not afford to
have a fact it claimed to know reversed by another epistemology, in
this case by the scientific method. Similar conflicts arose when
Darwin's view of how humans first appeared on the planet challenged
religious accounts of creation, or when Mendelian genetics applied to
the growth of hardier strains of wheat challenged Leninist doctrine as
interpreted by Lysenko.

One of the most dangerous ideas at large in the current culture is
that the "free market" is the ultimate arbiter of political decisions,
and that there is an "invisible hand" that will direct us to the most
desirable future provided the free market is allowed to actualize
itself. This mystical faith is based on some reasonable empirical
foundations, but when embraced as a final solution to the ills of
humankind, it risks destroying both the material resources and the
cultural achievements that our species has so painstakingly developed.

So the dangerous idea on which our culture is based is that the
political economy has a silver bullet -- the free market -- that must
take precedence over any other value, and thereby lead to peace and
prosperity. It is dangerous because, like all silver bullets, it is an
intellectual and political scam that might benefit some, but
ultimately requires the majority to pay for the destruction it causes.

My dangerous idea is dangerous only to those who support the hegemony
of the market. It consists in pointing out that the imperial free
market wears no clothes -- it does not exist in the first place, and
what passes for it is dangerous to the future well-being of our
species. Scientists need to turn their attention to what the complex
system that is human life will require in the future.

Beginnings like the Calvert-Henderson Quality of Life Indicators,
which focus on such central requirements as health, education,
infrastructure, environment, human rights, and public safety, need to
become part of our social and political agenda. And when their
findings come into conflict with the agenda of the prophets of the
free market, the conflict should be examined -- who is it that
benefits from the erosion of the quality of life?
_________________________________________________________________

IRENE PEPPERBERG
Research Associate, Psychology, Harvard University; Author, The Alex
Studies
[pepperberg100.jpg]
The differences between humans and nonhumans are quantitative, not
qualitative

I believe that the differences between humans and nonhumans are
quantitative, not qualitative.

Why is this idea dangerous? It is hardly surprising, coming from
someone who has spent her scientific career studying the abilities of
(supposedly) small-brained nonhumans; moreover, the idea is not
exactly new. It may be a bit controversial, given that many of my
colleagues spend much of their time searching for the defining
difference that separates humans and nonhumans (and they may be
correct), and also given a current social and political climate that
challenges evolution on what seems to be a daily basis. But why
dangerous? Because, if we take this idea to its logical conclusion, it
challenges almost every aspect of our lives -- scientific and
nonscientific alike.

Scientifically, the idea challenges the views of many researchers who
continue to hypothesize about the next human-nonhuman 'great
divide'...Interestingly, however, detailed observation and careful
experimentation have repeatedly demonstrated that nonhumans often
possess capacities once thought to separate them from humans. Humans,
for example, are not the only tool-using species, nor the only
tool-making species, nor the only species to act cooperatively.

So one has to wonder to what degree nonhumans share other capacities
still thought to be exclusively human. And, of course, the critical
words here are "to what degree" -- do we count lack of a particular
behavior a defining criterion, or do we accept the existence of less
complex versions of that behavior as evidence for a continuum? If one
wishes to argue that I'm just blurring the difference between
"qualitative" and "quantitative", so be it...such blurring will not
affect the dangerousness of my idea.

My idea is dangerous because it challenges scientists at a more basic
level, that of how we perform research. Now, let me state clearly that
I'm not against animal research -- I wouldn't be alive today without
it, and I work daily with captive animals that, although domestically
bred (and that, by any standard, are provided with a fairly cushy
existence), are still essentially wild creatures denied their freedom.

But if we believe in a continuum, then we must at least question our
right to perform experiments on our fellow creatures; we need to think
about how to limit animal experiments and testing to what is
essential, and to insist on humane (note the term!) housing and
treatment. And, importantly, we must accept the significant cost in
time, effort, and money thereby incurred -- increases that must come
at the expense of something else in our society.

The idea, taken to its logical conclusion, is dangerous because it
should also affect our choices as to the origins of the clothes we
wear and the foods we eat. Again, I'm not campaigning against leather
shoes and T-bone steaks; I find that I personally cannot remain
healthy on a totally vegetarian diet, and sheepskin boots definitely
ease the rigors of a Massachusetts winter.

But if we believe in a continuum, we must at least question our right
to use fellow creatures for our sustenance: We need to become aware
of, for example, the conditions under which creatures destined for the
slaughterhouse live their lives, and learn about and ameliorate the
conditions in which their lives are ended. And, again, we must accept
the costs involved in such decisions.

If we do not believe in a clear boundary between humans and nonhumans,
if we do not accept a clear "them" versus "us", we need to rethink
other aspects of our lives. Do we have the right to clear-cut forests
in which our fellow creatures live? To pollute the air, soil and water
that we share with them, solely for our own benefit? Where do we draw
the line? Life may be much simpler if we do firmly draw a line, but is
simplicity a valid rationale?

And, in case anyone wonders at my own personal view: I believe that
humans are the ultimate generalists, creatures that may lack specific
talents or physical adaptations that have been finely honed in other
species, but whose additional brain power enables them -- in an
exquisite manner -- to, for example, integrate information, improvise
with what is present, and alter or adapt to a wide range of
environments...but that this additional brain power is (and provides)
a quantitative, not qualitative difference.
_________________________________________________________________

BRIAN GOODWIN
Biologist, Schumacher College, Devon, UK; Author, How The Leopard
Changed Its Spots
[goodwin100.jpg]
Fields of Danger

In science, the concept of a field is used to describe patterns of
order in systems that are extended in space and show regularities of
behaviour in time. They have always expressed ideas that are rather
mysterious but that work in describing natural processes. The first
example of a field principle in physics was Newton's celebrated
gravitational law, which described mathematically the universal
attraction between bodies with mass.
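
[For reference: the law states that two bodies of mass m_1 and m_2 a
distance r apart attract each other with a force

    F = \frac{G\, m_1 m_2}{r^2},

where G is the gravitational constant.]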

This mysterious action at a distance without any wires or mechanical
attachments between the bodies was regarded as a mystical, occult
concept by the mechanical philosophers of the 17th and 18th centuries.
They condemned Newton's idea as a violation of the principles of
explanation in the new science. However, there is a healthy pragmatic
element to scientific investigation, and Newton's equations worked too
well to be discarded on philosophical grounds.

Another celebrated example of a physical field came from the
experimental work of Michael Faraday on electricity and magnetism in
the 19th century. He talked about fields of force that extend out in
space from electrically charged bodies, or from magnets. Faraday's
painstaking and ingenious work described how these fields change with
distance from the body in precise ways, as does the gravitational
force. Again these forces were regarded as mysterious since they
travel through apparently empty space, exerting interaction at a
distance that cannot be understood mechanically.

However, so precise were Faraday's measurements of the properties of
electric and magnetic fields, and so vivid his description of the
fields of force associated with them, that James Clerk Maxwell could
take his observations and put them directly into mathematical form.
These are the famous wave equations of electromagnetism on which our
technology for electric motors, lighting, TV, communications and
innumerable other applications is based.

In the 20th century, Einstein transformed Newton's mysterious
gravitational force into an even more mysterious property of space
itself: it bends or curves under the influence of bodies with mass.
Einstein's relativity theory did away with a force of attraction
between bodies and substituted a mathematical relationship between
mass and curvature of space-time.

The result was a whole new way of understanding motion as natural,
curved paths followed by bodies that not only cause the curvature but
follow it. The universe was becoming intrinsically self-organising,
and subjects, as observers, made an entry into physics.

As if Einstein's relativity wasn't enough to shake up the world known
to science, the next revolution was even more disturbing. Quantum
mechanics, emerging in the 1920s, did away with the classical notions
of fields as smooth distributions of forces through space-time and
described interactions at a distance in terms of discrete little
packets of energy that travel through the void in oscillating patterns
described by wave functions, of which the solutions to Schrödinger's
wave equation are the best known.

Now we have not only action at a distance but something infinitely
more disturbing: these interactions violate conventional notions of
causality because they are non-local. Two particles that have been
joined in an intimate relationship within an atom remain coherently
correlated with one another in their properties no matter how far
apart they may be after emission from the atom. Einstein could not
bring himself to believe that this 'spooky' implication of quantum
mechanics could possibly be real.

The implied entanglement means that there is a holistic principle of
connectedness in operation at the most elementary level of physical
reality. Quantum fields have subverted our basic notions of causality
and substituted a principle of wholeness in relationship for
elementary particles.

The idea that I have pursued in biology for much of my career is the
concept that goes under the name of a morphogenetic field. This term
is used to describe the processes in space and time that organise and
coordinate the various activities involved in the emergence of a whole
complex organism from a single cell, or from a group of cells in
interaction with one another.

A human embryo developing in the mother's womb from a single
fertilised egg, emerging at birth as a baby with all its organs
coherently arranged in a functioning body, is one of the most
breathtaking phenomena in nature. However, all species share the same
ability to produce new individuals of the same kind in their processes
of reproduction.

The remarkable organising principles that underlie such basic
properties of life have been known as morphogenetic fields (fields
that generate form) throughout the 20th century, though this concept
produces unease and discomfort among many biologists. This unease
arises for good reason. As in physics, the field concept is subversive
of mechanical explanations in science, and biology holds firmly to
understanding life in terms of mechanisms organised by genes.

However, the complete reading of the book of life in DNA, the major
project in biology during the last two decades of the 20th century,
did not reveal the secrets of the organism. It was a remarkable
achievement to work out the sequence of letters in the genomes of
different species, human, other animals, plants, and microbes, so that
many of the words of the genetic text of different species could be
deciphered.

Unfortunately, we were unable to make coherent sense of these words,
to put them together in the way that organisms do in creating
themselves during their reproduction as they develop into beings with
specific morphologies and behaviours, the process of morphogenesis.
What had been forgotten, or ignored, was that information only makes
sense to an agent, someone or something with the know-how to interpret
it.

The meaning was missing because the genome researchers ignored the
context of the genomes: the living cell within which genes are read
and their products are organised. The organisation that is responsible
for making sense of the information in the genes, an essential and
basic aspect of the living state, was taken for granted. What is the
nature of this complex dynamic process that knows how to make an
organism, using specific information from the genes?

Biology is returning to notions of space-time organisation as an
intrinsic aspect of the living condition, our old friends
morphogenetic fields. They are now described as complex networks of
molecules that somehow read and make sense of genes. These molecular
networks have intriguing properties, giving them some of the same
characteristics as words in a language.

Could it be that biology and culture are not so different after all;
that both are based on historical traditions and languages that are
used to construct patterns of relationship embodied in communities,
either of cells or of individuals? These self-organising activities
are certainly mysterious, but not unintelligible. My own work, with
many colleagues, cast morphogenetic fields in mathematical form that
revealed how space (morphology) and time (behaviour) get organised in
subtle but robust ways in developing organisms and communities.

Such coordinating patterns in living beings seem to be at the heart of
the creativity that drives both biological and cultural evolution.
Despite many differences between these fields, which need to be
clarified and distinguished rather than blurred, there may be
underlying commonalities that can unify biological and cultural
evolution rather than separating them.

This could even lead us to value other species of organism for their
wisdom in achieving coherent, sustainable relationships with other
species while remaining creative and innovative throughout evolution,
something we are signally failing to do in our culture with its
ecologically damaging style of living.
_________________________________________________________________

RUDY RUCKER
Mathematician, Computer Scientist; CyberPunk Pioneer; Novelist;
Author, Lifebox, the Seashell, and the Soul
[rucker100.jpg]

Mind is a universally distributed quality

Panpsychism. Each object has a mind. Stars, hills, chairs, rocks,
scraps of paper, flakes of skin, molecules -- each of them possesses
the same inner glow as a human, each of them has singular inner
experiences and sensations.

I'm quite comfortable with the notion that everything is a
computation. But what to do about my sense that there's something
numinous about my inner experience? Panpsychism represents a
non-anthropocentric way out: mind is a universally distributed
quality.

Yes, the workings of a human brain are a deterministic computation
that could be emulated by any universal computer. And, yes, I sense
more to my mental phenomena than the rule-bound exfoliation of
reactions to inputs: this residue is the inner light, the raw
sensation of existence. But, no, that inner glow is not the exclusive
birthright of humans, nor is it solely limited to biological
organisms.

Note that panpsychism needn't say that the universe is just one mind. We
can also say that each object has an individual mind. One way to
visualize the distinction between the many minds and the one mind is
to think of the world as a stained glass window with light shining
through each pane. The world's physical structures break the undivided
cosmic mind into a myriad of small minds, one in each object.

The minds of panpsychism can exist at various levels. As well as
having its own individuality, a person's mind would also be, for
instance, a hive mind based upon the minds of the body's cells and the
minds of the body's elementary particles.

Do the panpsychic minds have any physical correlates? On the one hand,
it could be that the mind is some substance that accumulates near
ordinary matter -- dark matter or dark energy are good candidates. On
the other hand, mind might simply be matter viewed in a special
fashion: matter experienced from the inside. Let me mention three
specific physical correlates that have been proposed for the mind.

Some have argued that the experience of mind results when a superposed
quantum state collapses into a pure state. It's an alluring metaphor,
but as a universal automatist, I'm of the opinion that quantum
mechanics is a stop-gap theory, destined to give way to a fully
deterministic theory based upon some digital precursor of spacetime.

David Skrbina, author of the clear and comprehensive book Panpsychism
in the West, suggests that we might think of a physical system as
determining a moving point in a multi-dimensional phase space that has
an axis for each of the system's measurable properties. He feels this
dynamic point represents the sense of unity characteristic of a mind.

As a variation on this theme, let me point out that, from the
universal automatist standpoint, every physical system can be thought
of as embodying a computation. And the majority of non-simple systems
embody universal computations, capable of emulating any other system
at all. It could be that having a mind is in some sense equivalent to
being capable of universal computation.

A side-remark. Even such very simple systems as a single electron may
in fact be capable of universal computation, if supplied with a steady
stream of structured input. Think of an electron in an oscillating
field; and by analogy think of a person listening to music or reading
an essay.
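
[A small illustration of this point, using a standard textbook example
rather than anything from the essay: elementary cellular automaton
Rule 110 is about as simple as a dynamical system gets -- each cell
looks only at itself and its two neighbours -- yet it is known, by
Matthew Cook's proof, to be capable of universal computation.]

    # Rule 110: the next state of each cell is the bit of the rule
    # number indexed by the three-cell neighbourhood (left, self, right).
    RULE = 110

    def step(cells):
        """Advance one generation, with fixed zero boundaries."""
        padded = [0] + cells + [0]
        return [(RULE >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
                for i in range(1, len(padded) - 1)]

    row = [0] * 40 + [1]          # start from a single live cell
    for _ in range(20):           # print 20 generations of the pattern
        print("".join("#" if c else "." for c in row))
        row = step(row)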

Might panpsychism be a distinction without a difference? Suppose we
identify the numinous mind with quantum collapse, with chaotic
dynamics, or with universal computation. What is added by claiming
that these aspects of reality are like minds?

I think empathy can supply an experiential confirmation of
panpsychism's reality. Just as I'm sure that I myself have a mind, I
can come to believe the same of another human with whom I'm in contact
-- whether face to face or via their creative work. And with a bit of
effort, I can identify with objects as well; I can see the objects in
the room around me as glowing with inner light. This is a pleasant
sensation; one feels less alone.

Could there ever be a critical experiment to test if panpsychism is
really true? Suppose that telepathy were to become possible, perhaps
by entangling a person's mental states with another system's states.
And then suppose that instead of telepathically contacting another
person, I were to contact a rock. At this point panpsychism would be
proved.

I still haven't said anything about why panpsychism is a dangerous
idea. Panpsychism, like other forms of higher consciousness, is
dangerous to business as usual. If my old car has the same kind of
mind as a new one, I'm less impelled to help the economy by buying a
new vehicle. If the rocks and plants on my property have minds, I feel
more respect for them in their natural state. If I feel myself among
friends in the universe, I'm less likely to overwork myself to earn
more cash. If my body will have a mind even after I'm dead, then death
matters less to me, and it's harder for the government to cow me into
submission.
_________________________________________________________________

STEVEN PINKER
Psychologist, Harvard University; Author, The Blank Slate
[pinker.100.jpg]

Groups of people may differ genetically in their average talents and
temperaments

The year 2005 saw several public appearances of what I predict will
become the dangerous idea of the next decade: that groups of people
may differ genetically in their average talents and temperaments.
* In January, Harvard president Larry Summers caused a firestorm
when he cited research showing that women and men have
non-identical statistical distributions of cognitive abilities and
life priorities.

* In March, developmental biologist Armand Leroi published an op-ed
in the New York Times rebutting the conventional wisdom that race
does not exist. (The conventional wisdom is coming to be known as
Lewontin's Fallacy: that because most genes may be found in all
human groups, the groups don't differ at all. But patterns of
correlation among genes do differ between groups, and different
clusters of correlated genes correspond well to the major races
labeled by common sense.)

* In June, the Times reported a forthcoming study by physicist Greg
Cochran, anthropologist Jason Hardy, and population geneticist
Henry Harpending proposing that Ashkenazi Jews have been
biologically selected for high intelligence, and that their
well-documented genetic diseases are a by-product of this
evolutionary history.

* In September, political scientist Charles Murray published an
article in Commentary reiterating his argument from The Bell Curve
that average racial differences in intelligence are intractable
and partly genetic.

Whether or not these hypotheses hold up (the evidence for gender
differences is reasonably good, for ethnic and racial differences much
less so), they are widely perceived to be dangerous. Summers was
subjected to months of vilification, and proponents of ethnic and
racial differences in the past have been targets of censorship,
violence, and comparisons to Nazis. Large swaths of the intellectual
landscape have been reengineered to try to rule these hypotheses out a
priori (race does not exist, intelligence does not exist, the mind is
a blank slate inscribed by parents). The underlying fear, that reports
of group differences will fuel bigotry, is not, of course, groundless.

The intellectual tools to defuse the danger are available. "Is" does
not imply "ought." Group differences, when they exist, pertain to the
average or variance of a statistical distribution, rather than to
individual men and women. Political equality is a commitment to
universal human rights, and to policies that treat people as
individuals rather than representatives of groups; it is not an
empirical claim that all groups are indistinguishable. Yet many
commentators seem unwilling to grasp these points, to say nothing of
the wider world community.
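
[A minimal numerical illustration of this statistical point, with
hypothetical effect sizes chosen only to show the arithmetic, not to
describe any real groups: even when two normal distributions differ in
mean, a randomly chosen member of the lower-mean group very often
outscores a randomly chosen member of the higher-mean group.]

    # Hypothetical sketch: two unit-variance normal distributions whose
    # means differ by d standard deviations. The chance that a random
    # draw from the lower-mean group exceeds a random draw from the
    # higher-mean group is Phi(-d / sqrt(2)), with Phi the standard
    # normal CDF.
    import math

    def normal_cdf(z):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def p_lower_beats_higher(d):
        """P(X > Y) for X ~ N(0, 1) and Y ~ N(d, 1); X - Y ~ N(-d, 2)."""
        return normal_cdf(-d / math.sqrt(2.0))

    for d in (0.2, 0.5, 1.0):   # illustrative effect sizes only
        print(f"d={d:.1f}: P(lower-mean individual outscores higher-mean) = "
              f"{p_lower_beats_higher(d):.2f}")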

Advances in genetics and genomics will soon provide the ability to
test hypotheses about group differences rigorously. Perhaps
geneticists will forbear performing these tests, but one shouldn't
count on it. The tests could very well emerge as by-products of
research in biomedicine, genealogy, and deep history which no one
wants to stop.

The human genomic revolution has spawned an enormous amount of
commentary about the possible perils of cloning and human genetic
enhancement. I suspect that these are red herrings. When people
realize that cloning is just forgoing a genetically mixed child for a
twin of one parent, and is not the resurrection of the soul or a
source of replacement organs, no one will want to do it. Likewise,
when they realize that most genes have costs as well as benefits (they
may raise a child's IQ but also predispose him to genetic disease),
"designer babies" will lose whatever appeal they have. But the
prospect of genetic tests of group differences in psychological traits
is both more likely and more incendiary, and is one that the current
intellectual community is ill-equipped to deal with.
_________________________________________________________________

RICHARD E. NISBETT
Professor of Psychology, Co-Director of the Culture and Cognition
Program, University of Michigan; Author, The Geography of Thought: How
Asians and Westerners Think Differently. . . And Why
[nisbett100.jpg]

Telling More Than We Can Know

Do you know why you hired your most recent employee over the
runner-up? Do you know why you bought your last pair of pajamas? Do
you know what makes you happy and unhappy?

Don't be too sure. The most important thing that social psychologists
have discovered over the last 50 years is that people are very
unreliable informants about why they behaved as they did, made the
judgment they did, or liked or disliked something. In short, we don't
know nearly as much about what goes on in our heads as we think. In
fact, for a shocking range of things, we don't know the answer to "Why
did I?" any better than an observer.

The first inkling that social psychologists had about just how
ignorant we are about our thinking processes came from the study of
cognitive dissonance beginning in the late 1950s. When our behavior is
insufficiently justified, we move our beliefs into line with the
behavior so as to avoid the cognitive dissonance we would otherwise
experience. But we are usually quite unaware that we have done that,
and when it is pointed out to us we recruit phantom reasons for the
change in attitude.

Beginning in the mid-1960s, social psychologists started doing
experiments about the causal attributions people make for their own
behavior. If you give people electric shocks, but tell them that you
have given them a pill that will produce the arousal symptoms that are
actually created by the shock, they will take much more shock than
subjects without the pill. They have attributed their arousal to the
pill and are therefore willing to take more shock. But if you ask them
why they took so much shock they are likely to say something like "I
used to work with electrical gadgets and I got a lot of shocks, so I
guess I got used to it."

In the 1970s social psychologists began asking whether people could be
accurate about why they make truly simple judgments and decisions --
such as why they like a person or an article of clothing.

For example, in one study experimenters videotaped a Belgian
responding in one of two modes to questions about his philosophy as a
teacher: he either came across as an ogre or a saint. They then showed
subjects one of the two tapes and asked them how much they liked the
teacher. Furthermore, they asked some of them whether the teacher's
accent had affected how much they liked him and asked others whether
how much they liked the teacher influenced how much they liked his
accent. Subjects who saw the ogre naturally disliked him a great deal,
and they were quite sure that his grating accent was one of the
reasons. Subjects who saw the saint realized that one of the reasons
they were so fond of him was his charming accent. Subjects who were
asked if their liking for the teacher could have influenced their
judgment of his accent were insulted by the question.
Does familiarity breed contempt? On the contrary, it breeds liking. In
the 1980s, social psychologists began showing people such stimuli as
Turkish words and Chinese ideographs and asking them how much they
liked them. They would show a given stimulus somewhere between one and
twenty-five times. The more the subjects saw the stimulus the more
they liked it. Needless to say, their subjects did not find it
plausible that the mere number of times they had seen a stimulus could
have affected their liking for it. (You're probably wondering if white
rats are susceptible to the mere familiarity effect.
The study has been done. Rats brought up listening to music by Mozart
prefer to move to the side of the cage that trips a switch allowing
them to listen to Mozart rather than Schoenberg. Rats raised on
Schoenberg prefer to be on the Schoenberg side. The rats were not
asked the reasons for their musical preferences.)
Does it matter that we often don't know what goes on in our heads and
yet believe that we do? Well, for starters, it means that we often
can't accurately answer crucial questions about what makes us happy
and what makes us unhappy. A social psychologist asked Harvard women
to keep a daily record for two months of their mood states and also to
record a number of potentially relevant factors in their lives
including amount of sleep the night before, the weather, general state
of health, sexual activity, and day of the week (Monday blues? TGIF?).
At the end of the period, subjects were asked to tell the
experimenters how much each of these factors tended to influence their
mood over the two month period. The results? Women's reports of what
influenced their moods were uncorrelated with what they had reported
on a daily basis. If a woman thought that her sexual activity had a
big effect, a check of her daily reports was just as likely to show
that it had no effect as that it did. To really rub it in, the
psychologist asked her subjects to report what influenced the moods of
someone they didn't know: She found that accuracy was just as great
when a woman was rated by a stranger as when rated by the woman
herself!
But if we were to just think really hard about reasons for behavior
and preferences might we be likely to come to the right conclusions?
Actually, just the opposite may often be the case. A social
psychologist asked people to choose which of several art posters they
liked best.
Some people were asked to analyze why they liked or disliked the
various posters and some were not asked, and everyone was given their
favorite poster to take home. Two weeks later the psychologist called
people up and asked them how much they liked the art poster they had
chosen. Those who did not analyze their reasons liked their posters
better than those who did.
It's certainly scary to think that we're ignorant of so much of what
goes on in our heads, but we're almost surely better off taking what
we and others say about motives and reasons with a large quantity of
salt. Skepticism about our ability to read our minds is safer than
certainty that we can.
Still, the idea that we have little access to the workings of our
minds is a dangerous one. The theories of Copernicus and Darwin were
dangerous because they threatened, respectively, religious conceptions
of the centrality of humans in the cosmos and the divinity of humans.
Social psychologists are threatening a core conviction of the
Enlightenment -- that humans are perfectible through the exercise of
reason. If reason cannot be counted on to reveal the causes of our
beliefs, behavior and preferences, then the idea of human
perfectibility is to that degree diminished.
_________________________________________________________________

ROBERT R. PROVINE
Psychologist and Neuroscientist, University of Maryland; Author,
Laughter

This is all there is

The empirically testable idea that the here and now is all there is
and that life begins at birth and ends at death is so dangerous that
it has cost the lives of millions and threatens the future of
civilization. The danger comes not from the idea itself, but from its
opponents, those religious leaders and followers who ruthlessly
advocate and defend their empirically improbable afterlife and
man-in-the-sky cosmological perspectives.

Their vigor is understandable. What better theological franchise is
there than the promise of everlasting life, with deluxe trimmings?
Religious followers must invest now with their blood and sweat, with
their big payoff not due until the afterlife. Postmortal rewards cost
theologians nothing--I'll match your heavenly choir and raise you 72
virgins.

Some franchise! This is even better than the medical profession, a
calling with higher overhead that has gained control of birth, death,
and pain. Whether the religious brand is Christianity or Islam, the
warring continues, with a terrible fate reserved for heretics who
threaten the franchise from within. Worse may be in store for those
who totally reject the man-in-the-sky premise and its afterlife
trappings. All of this trouble over accepting what our senses tell
us--that this is all there is.

Resolution of religious conflict is impossible because there is no
empirical test of the ghostly, and many theologians prey,
intentionally or not, upon the fears, superstitions, irrationality,
and herd tendencies that are our species' neurobehavioral endowment.
Religious fundamentalism inflames conflict and prevents solution--the
more extreme and irrational one's position, the stronger one's faith,
and, when possessing absolute truth, compromise is not an option.

Resolution of conflicts between religions and associated cultures is
less likely to come from compromise than from the pursuit of
superordinate goals: common, overarching objectives that extend
across nations and cultures and direct our competitive spirit to
further the health, well-being, and nobility of everyone. Public
health and science provide such unifying goals. I offer two examples.

Health Initiative. A program that improves the health of all people,
especially those in developing nations, may find broad support,
especially with the growing awareness of global culture and the
looming specter of a pandemic. Public health programs bridge
religious, political, and cultural divides. No one wants to see their
children die. Conflicts fall away when cooperation offers a better
life for all concerned. This is also the most effective anti-terrorism
strategy, although one probably unpopular with the military-industrial
complex on one side and terrorist agitators on the other.

Space Initiative. Space exploration expands our cosmos and increases
our appreciation of life on Earth and its finite resources. Space
exploration is one of our species' greatest achievements. Its pursuit
is a goal of sufficient grandeur to unite people of all nations.

This is all there is. The sooner we accept this dangerous idea, the
sooner we can get on with the essential task of making the most of our
lives on this planet.
_________________________________________________________________

DONALD HOFFMAN
Cognitive Scientist, UC, Irvine; Author, Visual Intelligence

A spoon is like a headache

A spoon is like a headache. This is a dangerous idea in sheep's
clothing. It consumes decrepit ontology, preserves methodological
naturalism, and inspires exploration for a new ontology, a vehicle
sufficiently robust to sustain the next leg of our search for a theory
of everything.

How could a spoon and a headache do all this? Suppose I have a
headache, and I tell you about it. It is, say, a pounding headache
that started at the back of the neck and migrated to encompass my
forehead and eyes. You respond empathetically, recalling a similar
headache you had, and suggest a couple remedies. We discuss our
headaches and remedies a bit, then move on to other topics.

Of course no one but me can experience my headaches, and no one but
you can experience yours. But this posed no obstacle to our meaningful
conversation. You simply assumed that my headaches are relevantly
similar to yours, and I assumed the same about your headaches. The
fact that there is no "public headache," no single headache that we
both experience, is simply no problem.

A spoon is like a headache. Suppose I hand you a spoon. It is common
to assume that the spoon I experience during this transfer is
numerically identical to the spoon you experience. But this assumption
is false. No one but me can experience my spoon, and no one but you
can experience your spoon. But this is no problem. It is enough for me
to assume that your spoon experience is relevantly similar to mine.
For effective communication, no public spoon is necessary, just like
no public headache is necessary. Is there a "real spoon," a
mind-independent physical object that causes our spoon experiences and
resembles our spoon experiences? This is not only unnecessary but
unlikely. It is unlikely that the visual experiences of Homo sapiens,
shaped to permit survival in a particular range of niches, should
miraculously also happen to resemble the true nature of a
mind-independent realm. Selective pressures for survival do not,
except by accident, lead to truth.

One can have a kind of objectivity without requiring public objects.
In special relativity, the measurements, and thus the experiences, of
mass, length and time differ from observer to observer, depending on
their relative velocities. But these differing experiences can be
related by the Lorentz transformation. This is all the objectivity one
can have, and all one needs to do science.
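
For concreteness (an editorial addition, not Hoffman's own wording),
the standard one-dimensional Lorentz transformation relating one
observer's coordinates (x, t) to a second observer's (x', t') at
relative velocity v is, in LaTeX notation:

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
    x' = \gamma\,(x - v t), \qquad
    t' = \gamma\left(t - \frac{v x}{c^2}\right)

Each observer's measurements differ, but any observer can recover
another's by this rule -- exactly the observer-relative kind of
objectivity Hoffman is appealing to.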

Once one abandons public physical objects, one must reformulate many
current open problems in science. One example is the mind-brain
relation. There are no public brains, only my brain experiences and
your brain experiences. These brain experiences are just the
simplified visual experiences of Homo sapiens, shaped for survival in
certain niches. The chances that our brain experiences resemble some
mind-independent truth are remote at best, and those who would claim
otherwise must surely explain the miracle. Failing a clever
explanation of this miracle, there is no reason to believe brains
cause anything, including minds. And here the wolf unzips the sheep
skin, and darts out into the open. The danger becomes apparent the
moment we switch from boons to sprains. Oh, pardon the spoonerism.
_________________________________________________________________

MARC D. HAUSER
Psychologist and Biologist, Harvard University: Author, Wild Minds

A universal grammar of [mental] life

The recent explosion of work in molecular evolution and developmental
biology has, for the first time, made it possible to propose a radical
new theory of mental life that, if true, will forever rewrite the
textbooks and our way of thinking about our past and future. It
explains both the universality of our thoughts and the unique
signatures that demarcate each human culture, past, present and
future.

The theory I propose is that human mental life is based on a few
simple, abstract, yet expressively powerful rules or computations
together with an instructive learning mechanism that prunes the range
of possible systems of language, music, mathematics, art, and morality
to a limited set of culturally expressed variants. In many ways, this
view isn't new or radical. It stems from thinking about the seemingly
constrained ways in which relatively open ended or generative systems
of expression create both universal structure and limited variation.

Unfortunately, what appears to be a rather modest proposal on some
counts is dangerous on others. It is dangerous to those who abhor
biologically grounded theories on the often misinterpreted perspective
that biology determines our fate, derails free will, and erases the
soul. But a look at systems other than the human mind makes it
transparently clear that the argument from biological endowment does
not entail any of these false inferences.

For example, we now understand that our immune systems don't learn
from the environment how to tune up to the relevant problems. Rather,
we are equipped with a full repertoire of antibodies to deal with a
virtually limitless variety of problems, including some that have not
yet even emerged in the history of life on earth. This initially seems
counter-intuitive: how could the immune system have evolved to predict
the kinds of problems we might face? The answer is that it couldn't.

What it evolved instead was a set of molecular computations that, in
combination with each other, can handle an infinitely larger set of
conditions than any single combination on its own. The role of the
environment is as instructor, functionally telling the immune system
about the current conditions, resulting in a process of paring down
the initial starting options.

The pattern of change observed in the immune system, characterized by
an initial set of universal computations or options followed by an
instructive process of pruning, is seen in systems as disparate as the
genetic mechanisms underlying segmented body parts in vertebrates, the
basic body plan of land plants involving the shoot system of stem and
leaves, and song development in birds. Songbirds are particularly
interesting as the system for generating a song seems to be analogous
in important ways to our capacity to generate a specific language.
Humans and songbirds start with a species-specific capacity to build
language and song respectively, and this capacity has limitless
expressive power. Upon delivery and hatching, and possibly a bit
before, the local acoustic environment begins the process of
instruction, pruning the possible languages and songs down to one or
possibly two. The common thread here is a starting state of universal
computations or options followed by an instructive process of pruning,
ending up with distinctive systems that share an underlying common
core. Hard to see how anyone could find this proposal dangerous or
off-putting, or even wrong!
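
A minimal toy sketch, in Python, of the "universal starting options
followed by instructive pruning" pattern described above; the note
alphabet and the "songs heard" are invented for illustration and are
not Hauser's model:

    # Toy sketch: a universal option set pruned by environmental input.
    # Both the option space and the "local input" below are invented.

    notes = ["A", "B", "C", "D"]

    # Universal starting state: every two-note "song" the system could
    # in principle express.
    universal_options = {(x, y) for x in notes for y in notes}

    # Instructive environment: the songs actually heard in development.
    local_input = [("A", "C"), ("A", "C"), ("B", "C")]

    # Pruning: keep only the options the environment has instructed.
    pruned_repertoire = universal_options & set(local_input)

    print(f"{len(universal_options)} starting options pruned to "
          f"{len(pruned_repertoire)}: {sorted(pruned_repertoire)}")

The starting state is rich and universal; the end state is a small,
locally distinctive subset of it, which is the shape of the argument.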

Now jump laterally, and make the move to aesthetics and ethics. Our
minds are endowed with universal computations for creating and judging
art, music, and morally relevant actions. Depending upon where we are
born, we will find atonal music pleasing or disgusting, and
infanticide obligatory or abhorrent. The common or universal core is,
for music, a set of rules for combining notes to alter our
emotions, and for morality, a different set of rules for combining the
causes and consequences of action to alter our permissibility
judgments.

To say that we are endowed with a universal moral sense is not to say
that we will do the right or wrong thing, with any consistency. The
idea that there is a moral faculty, grounded in our biology, says
nothing at all about the good, the bad or the ugly. What it says is
that we have evolved particular biases, designed as a function of
selection for particular kinds of fit to the environment, under
particular constraints. But nothing about this claim leads to the good
or the right or the permissible.

The reason this has to be the case is twofold: there is not only
cultural variation but environmental variation over evolutionary time.
What is good for us today may not be good for us tomorrow. But the key
insight delivered by the nativist perspective is that we must
understand the nature of our biases in order to work toward some good
or better world, realizing all along that we are constrained.
Appreciating the choreography between universal options and
instructive pruning is only dangerous if misused to argue that our
evolved nature is good, and what is good is right. That's bad.
_________________________________________________________________

RAY KURZWEIL
Inventor and Technologist; Author, The Singularity Is Near: When
Humans Transcend Biology

The near-term inevitability of radical life extension and expansion

My dangerous idea is the near-term inevitability of radical life
extension and expansion. The idea is dangerous, however, only when
contemplated from current linear perspectives.

First the inevitability: the power of information technologies is
doubling each year, and moreover comprises areas beyond computation,
most notably our knowledge of biology and of our own intelligence. It
took 15 years to sequence HIV, and from that perspective the genome
project seemed impossible in 1990. But the amount of genetic data we
were able to sequence doubled every year while the cost came down by
half each year.

We finished the genome project on schedule and were able to sequence
SARS in only 31 days. We are also gaining the means to reprogram the
ancient information processes underlying biology. RNA interference can
turn genes off by blocking the messenger RNA that expresses them. New
forms of gene therapy are now able to place new genetic information in
the right place on the right chromosome. We can create or block
enzymes, the workhorses of biology. We are reverse-engineering -- and
gaining the means to reprogram -- the information processes underlying
disease and aging, and this process is accelerating, doubling every
year. If we think linearly, then the idea of turning off all disease
and aging processes appears far off in the future, just as the genome
project did in 1990. On the other hand, if we factor in the doubling
of the power of these technologies each year, the prospect of radical
life extension is only a couple of decades away.
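
A toy back-of-the-envelope sketch, in Python, of why the two
extrapolations diverge so sharply; the "1000x" target is an arbitrary
placeholder, not Kurzweil's own figure:

    # Toy comparison of linear vs. yearly-doubling projections.
    import math

    initial_capability = 1.0   # capability in year 0 (arbitrary units)
    target = 1000.0            # capability assumed needed for the goal

    # Linear view: progress continues forever at the year-0 rate.
    years_linear = target / initial_capability

    # Exponential view: capability doubles every year.
    years_doubling = math.log2(target / initial_capability)

    print(f"linear projection:  about {years_linear:.0f} years")
    print(f"doubling each year: about {years_doubling:.1f} years")

On the linear assumption the goal sits roughly a millennium away; with
yearly doubling it is roughly a decade away, which is the shape of the
argument above.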

In addition to reprogramming biology, we will be able to go
substantially beyond biology with nanotechnology in the form of
computerized nanobots in the bloodstream. If the idea of programmable
devices the size of blood cells performing therapeutic functions in
the bloodstream sounds like far off science fiction, I would point out
that we are doing this already in animals. One scientist cured type I
diabetes in rats with blood-cell-sized devices containing 7-nanometer
pores that let insulin out in a controlled fashion and that block
antibodies. If we factor in the exponential advance of computation and
communication (price-performance multiplying by a factor of a billion
in 25 years while at the same time shrinking in size by a factor of
thousands), these scenarios are highly realistic.

The apparent dangers are not real while unapparent dangers are real.
The apparent dangers are that a dramatic reduction in the death rate
will create overpopulation and thereby strain energy and other
resources while exacerbating environmental degradation. However, we
need to capture only 1 percent of 1 percent of the sunlight to meet
all of our energy needs (3 percent of 1 percent by 2025) and
nanoengineered solar panels and fuel cells will be able to do this,
thereby meeting all of our energy needs in the late 2020s with clean
and renewable methods. Molecular nanoassembly devices will be able to
manufacture a wide range of products, just about everything we need,
with inexpensive tabletop devices. The power and price-performance of
these systems will double each year, much faster than the doubling
rate of the biological population. As a result, poverty and pollution
will decline and ultimately vanish despite growth of the biological
population.

There are real downsides, however, and this is not a utopian vision.
We have a new existential threat today in the potential of a
bioterrorist to engineer a new biological virus. We actually do have
the knowledge to combat this problem (for example, new vaccine
technologies and RNA interference, which has been shown capable of
destroying arbitrary biological viruses), but it will be a race. We
will have similar issues with the feasibility of self-replicating
nanotechnology in the late 2020s. Containing these perils while we
harvest the promise is arguably the most important issue we face.

Some people see these prospects as dangerous because they threaten
their view of what it means to be human. There is a fundamental
philosophical divide here. In my view, it is not our limitations that
define our humanity. Rather, we are the species that seeks and
succeeds in going beyond our limitations.
_________________________________________________________________

HAIM HARARI
Physicist, former President, Weizmann Institute of Science

Democracy may be on its way out

Democracy may be on its way out. Future historians may determine that
Democracy will have been a one-century episode. It will disappear.
This is a sad, truly dangerous, but very realistic idea (or, rather,
prediction).

Falling boundaries between countries, cross border commerce, merging
economies, instant global flow of information and numerous other
features of our modern society, all lead to multinational structures.
If you extrapolate this irreversible trend, you get the entire planet
becoming one political unit. But in this unit, anti-democracy forces
are now a clear majority. This majority increases by the day, due to
demographic patterns. All democratic nations have slow, vanishing or
negative population growth, while all anti-democratic and uneducated
societies multiply fast. Within democratic countries, most
well-educated families remain small while the least educated families
are growing fast. This means that, both at the individual level and at
the national level, the more people you represent, the less economic
power you have. In a knowledge based economy, in which the number of
working hands is less important, this situation is much more
non-democratic than in the industrial age. As long as upward mobility
of individuals and nations could neutralize this phenomenon, democracy
was tenable. But when we apply this analysis to the entire planet, as
it evolves now, we see that democracy may be doomed.

To these we must add the regrettable fact that authoritarian
multinational corporations, by and large, are better managed than
democratic nation states. Religious preaching, TV sound bites, cross
boundary TV incitement and the freedom of spreading rumors and lies
through the internet encourage brainwashing and lack of rational
thinking. Proportionately, more young women are growing into societies
which discriminate against them than into more egalitarian societies,
increasing the worldwide percentage of women treated as second class
citizens. Educational systems in most advanced countries are in a deep
crisis while modern education in many developing countries is almost
non-existent. A small well-educated technological elite is becoming
the main owner of intellectual property, which is, by far, the most
valuable economic asset, while the rest of the world drifts towards
fanaticism of one kind or another. Add all of the above and the
unavoidable conclusion is that Democracy, our least bad system of
government, is on its way out.

Can we invent a better new system? Perhaps. But this cannot happen if
we are not allowed to utter the sentence: "There may be a political
system which is better than Democracy". Today's political correctness
does not allow one to say such things. The result of this prohibition
will be an inevitable return to some kind of totalitarian rule,
different from that of the emperors, the colonialists or the landlords
of the past, but not more just. On the other hand, open and honest
thinking about this issue may lead either to a gigantic worldwide
revolution in educating the poor masses, thus saving democracy, or to
a careful search for a just (repeat, just) and better system.

I cannot resist a cheap parting shot: When, in the past two years,
Edge asked for brilliant ideas you believe in but cannot prove, or for
proposals of exciting new laws, most answers related to science and
technology. When the question is now about dangerous ideas, almost all
answers touch on issues of politics and society and not on the "hard
sciences". Perhaps science is not so dangerous, after all.
_________________________________________________________________

DAVID G. MYERS
Social Psychologist; Co-author (with Letha Scanzoni), What God Has
Joined Together: A Christian Case for Gay Marriage

A marriage option for all

Much as others have felt compelled by evidence to believe in human
evolution or the warming of the planet, I feel compelled by evidence
to believe a) that sexual orientation is a natural, enduring
disposition and b) that the world would be a happier and healthier
place if, for all people, romantic love, sex, and marriage were a
package.
In my Midwestern social and religious culture, the words "for all
people" transform a conservative platitude into a dangerous idea, over
which we are fighting a culture war. On one side are traditionalists,
who feel passionately about the need to support and renew marriage. On
the other side are progressives, who assume that our sexual
orientation is something we did not choose and cannot change, and that
we all deserve the option of life within a covenant partnership.

I foresee a bridge across this divide as folks on both the left and
the right engage the growing evidence of our panhuman longing for
belonging, of the benefits of marriage, and of the biology and
persistence of sexual orientation. We now have lots of data showing
that marriage is conducive to healthy adults, thriving children, and
flourishing communities. We also have a dozen discoveries of
gay-straight differences in everything from brain physiology to skill
at mentally rotating geometric figures. And we have an emerging
professional consensus that sexual reorientation therapies seldom
work.

More and more young adults -- tomorrow's likely majority, given
generational succession -- are coming to understand this evidence, and
to support what in the future will not seem so dangerous: a marriage
option for all.
_________________________________________________________________

CLAY SHIRKY
Social & Technology Network Topology Researcher; Adjunct Professor,
NYU Graduate School of Interactive Telecommunications Program (ITP)

Free will is going away. Time to redesign society to take that into
account.

In 2002, a group of teenagers sued McDonald's for making them fat,
charging, among other things, that McDonald's used promotional
techniques to get them to eat more than they should. The suit was
roundly condemned as an erosion of the sense of free will and
personal responsibility in our society. Less widely remarked upon was
that the teenagers were offering an accurate account of human
behavior.

Consider the phenomenon of 'super-sizing', where a restaurant patron
is offered the chance to increase the portion size of their meal for
some small amount of money. This presents a curious problem for the
concept of free will -- the patron has already made a calculation
about the amount of money they are willing to pay in return for a
particular amount of food. However, when the question is re-asked --
not "Would you pay $5.79 for this total amount of food?" but "Would
you pay an additional 30 cents for more french fries?" -- patrons
often say yes, despite having answered "No" moments before to an
economically identical question.

Super-sizing is expressly designed to subvert conscious judgment, and
it works. By re-framing the question, fast food companies have found
ways to take advantage of weaknesses in our analytical apparatus,
weaknesses that are being documented daily in behavioral economics and
evolutionary psychology.

This matters for more than just fat teenagers. Our legal, political,
and economic systems, the mechanisms that run modern society, all
assume that people are uniformly capable of consciously modulating
their behaviors. As a result, we regard decisions they make as being
valid, as with elections, and hold them responsible for actions they
take, as in contract law or criminal trials. Then, in order to get
around the fact that some people obviously aren't capable of
consciously modulating their behavior, we carve out ad hoc exemptions.
In U.S. criminal law, a 15-year-old who commits a crime is treated
differently than a 16-year-old. A crime committed in the heat of the
moment is treated specially. Some actions are not crimes because their
perpetrator is judged mentally incapable, whether through
developmental disabilities or other forms of legally defined insanity.

This theoretical divide, between the mass of people with a uniform
amount of free will and a small set of exceptional individuals, has
been broadly stable for centuries, in part because it was based on
ignorance. As long as we were unable to locate any biological source
of free will, treating the mass of people as if each of them had the
same degree of control over their lives made perfect sense; no more
refined judgments were possible. However, that binary notion of free
will is being eroded as our understanding of the biological
antecedents of behavior improves.

Consider laws concerning convicted pedophiles. Concern about their
recidivism rate has led to the enactment of laws that restrict their
freedom based on things they might do in the future, even though this
expressly subverts the notion of free will in the judicial system. The
formula here -- heinousness of crime x likelihood of repeat offense --
creates a new, non-insane class of criminals whose penalty is indexed
to a perceived lack of control over themselves.

But pedophilia is not unique in its measurably high recidivism rate.
All rapists have higher than average recidivism rates. Thieves of all
varieties are likelier to become repeat offenders if they have short
time horizons or poor impulse control. Will we keep more kinds of
criminals constrained after their formal sentence is served, as we
become better able to measure the likely degree of control they have
over their own future actions? How can we, if we are to preserve the
idea of personal responsibility? How can we not, once we are able to
quantify the risk?

Criminal law is just one area where our concept of free will is
eroding. We know that men make more aggressive decisions after they
have been shown pictures of attractive female faces. We know women are
more likely to commit infidelity on days they are fertile. We know
that patients committing involuntary physical actions routinely (and
incorrectly) report that they decided to undertake those actions, in
order to preserve their sense that they are in control. We know that
people will drive across town to save $10 on a $50 appliance, but not
on a $25,000 car. We know that the design of the ballot affects a
voter's choices. And we are still in the early days of even
understanding these effects, much less designing everything from sales
strategies to drug compounds to target them.

Conscious self-modulation of behavior is a spectrum. We have treated
it as a single property -- you are either capable of free will, or you
fall into an exceptional category -- because we could not identify,
measure, or manipulate the various components that go into such
self-modulation. Those days are now ending, and everyone from
advertisers to political consultants increasingly understands, in
voluminous biological detail, how to manipulate consciousness in ways
that weaken our notion of free will.

In the coming decades, our concept of free will, based as it is on
ignorance of its actual mechanisms, will be destroyed by what we learn
about the actual workings of the brain. We can wait for that
collision, and decide what to do then, or we can begin thinking
through what sort of legal, political, and economic systems we need in
a world where our old conception of free will is rendered inoperable.
_________________________________________________________________

MICHAEL SHERMER
Publisher of Skeptic magazine, monthly columnist for Scientific
American; Author, Science Friction

Where goods cross frontiers, armies won't

Where goods cross frontiers, armies won't. Restated: where economic
borders are porous between two nations, political borders become
impervious to armies.

Data from the new sciences of evolutionary economics, behavioral
economics, and neuroeconomics reveals that when people are free to
cooperate and trade (such as in game theory protocols) they establish
trust that is reinforced through neural pathways that release such
bonding hormones as oxytocin. Thus, modern biology reveals that where
people are free to cooperate and trade they are less likely to fight
and kill those with whom they are cooperating and trading.

My dangerous idea is a solution to what I call the "really hard
problem": how best should we live? My answer: A free society, defined
as free-market economics and democratic politics -- fiscal
conservatism and social liberalism -- which leads to the greatest
liberty for the greatest number. Since humans are, by nature, tribal,
the overall goal is to expand the concept of the tribe to include all
members of the species into a global free society. Free trade between
all peoples is the surest way to reach this goal.

People have a hard time accepting free market economics for the same
reason they have a hard time accepting evolution: it is
counterintuitive. Life looks intelligently designed, so our natural
inclination is to infer that there must be an intelligent designer --
a God. Similarly, the economy looks designed, so our natural
inclination is to infer that we need a designer -- a Government. In
fact, emergence and complexity theory explains how the principles of
self-organization and emergence cause complex systems to arise from
simple systems without a top-down designer.

Charles Darwin's natural selection is Adam Smith's invisible hand.
Darwin showed how complex design and ecological balance were
unintended consequences of individual competition among organisms.
Smith showed how national wealth and social harmony were unintended
consequences of individual competition among people. Nature's economy
mirrors society's economy. Thus, integrating evolution and economics
-- what I call evonomics -- reveals that an old economic doctrine is
supported by modern biology.
_________________________________________________________________

ARNOLD TREHUB
Psychologist, University of Massachusetts, Amherst; Author, The
Cognitive Brain

Modern science is a product of biology

The entire conceptual edifice of modern science is a product of
biology. Even the most basic and profound ideas of science -- think
relativity, quantum theory, the theory of evolution -- are generated
and necessarily limited by the particular capacities of our human
biology. This implies that the content and scope of scientific
knowledge is not open-ended.
_________________________________________________________________

ROGER C. SCHANK
Psychologist & Computer Scientist; Chief Learning Officer, Trump
University; Author, Making Minds Less Well Educated than Our Own
No More Teacher's Dirty Looks

After a natural disaster, the newscasters eventually announce
excitedly that school is finally open, so that no matter what else is
terrible where they live, the kids are going to school. I always feel
sorry for the poor kids.

My dangerous idea is one that most people immediately reject without
giving it serious thought: school is bad for kids -- it makes them
unhappy and, as tests show, they don't learn much.

When you listen to children talk about school you easily discover what
they are thinking about in school: who likes them, who is being mean
to them, how to improve their social ranking, how to get the teacher
to treat them well and give them good grades.

Schools are structured today in much the same way as they have been
for hundreds of years. And for hundreds of years philosophers and
others have pointed out that school is really a bad idea:

We are shut up in schools and college recitation rooms for ten or
fifteen years, and come out at last with a belly full of words and
do not know a thing. -- Ralph Waldo Emerson

Education is an admirable thing, but it is well to remember from
time to time that nothing that is worth knowing can be taught. --
Oscar Wilde

Schools should simply cease to exist as we know them. The Government
needs to get out of the education business and stop thinking it knows
what children should know and then testing them constantly to see if
they regurgitate whatever they have just been spoon fed.

The Government is and always has been the problem in education:

If the government would make up its mind to require for every child
a good education, it might save itself the trouble of providing
one. It might leave to parents to obtain the education where and
how they pleased, and content itself with helping to pay the school
fees of the poorer classes of children, and defraying the entire
school expenses of those who have no one else to pay for them. --
JS Mill

First, God created idiots. That was just for practice. Then He
created school boards. -- Mark Twain

Schools need to be replaced by safe places where children can go to
learn how to do things that they are interested in learning how to do.
Their interests should guide their learning. The government's role
should be to create places that are attractive to children and would
cause them to want to go there.

Whence it comes to pass, that for not having chosen the right
course, we often take very great pains, and consume a good part of
our time in training up children to things, for which, by their
natural constitution, they are totally unfit. -- Montaigne

We had a President many years ago who understood what education is
really for. Nowadays we have ones that make speeches about the
Pythagorean Theorem when we are quite sure they don't know anything
about any theorem.

There are two types of education. . . One should teach us how to
make a living, And the other how to live. -- John Adams

Over a million students have opted out of the existing school system
and are now being home schooled. The problem is that the states
regulate home schooling and home schooling still looks an awful lot
like school.

We need to stop producing a nation of stressed out students who learn
how to please the teacher instead of pleasing themselves. We need to
produce adults who love learning, not adults who avoid all learning
because it reminds them of the horrors of school. We need to stop
thinking that all children need to learn the same stuff. We need to
create adults who can think for themselves and who are not convinced
that complex situations can be understood in simplistic terms and
rendered in a sound bite.

Just call school off. Turn the school buildings into apartment houses.
_________________________________________________________________

SUSAN BLACKMORE
Psychologist and Skeptic; Author, Consciousness: An Introduction
Everything is pointless

We humans can, and do, make up our own purposes, but ultimately the
universe has none. All the wonderfully complex and beautifully
designed things we see around us were built by the same purposeless
process -- evolution by natural selection. This includes everything
from microbes and elephants to skyscrapers and computers, and even our
own inner selves.

People have (mostly) got used to the idea that living things were
designed by natural selection, but they have more trouble accepting
that human creativity is just the same process operating on memes
instead of genes. It seems, they think, to take away uniqueness,
individuality and "true creativity".

Of course it does nothing of the kind; each person is unique even if
that uniqueness is explained by their particular combination of genes,
memes and environment, rather than by an inner conscious self who is
the fount of creativity.
_________________________________________________________________

DAVID LYKKEN
Behavioral geneticist and Emeritus Professor of Psychology, University
of Minnesota; Author, Happiness
Laws requiring parental licensure

I believe that, during my grandchildren's lifetimes, the U.S. Supreme
Court will find a way to approve laws requiring parental licensure.

Traditional societies in which children are socialized collectively,
the method to which our species is evolutionarily adapted, have very
little crime. In the modern U.S., the proportion of fatherless
children, living with unmarried mothers, currently some 10 million in
all, has increased more than 400% since 1960 while the violent crime
rate rose 500% by 1994, before dipping slightly due to a delayed but
equal increase in the number of prison inmates (from 240,000 to 1.4
million). In 1990, across the 50 states, the correlation between the
violent crime rate and the proportion of illegitimate births was 0.70.
About 70% of incarcerated delinquents, of teen-age pregnancies, and of
adolescent runaways involve (I think result from) fatherless rearing.
Because these frightening curves continue to accelerate, I believe we
must eventually confront the need for parental licensure -- you can't
keep that newborn unless you are 21, married and self-supporting --
not just for society's safety but so those babies will have a chance
for life, liberty, and the pursuit of happiness.
_________________________________________________________________

CLIFFORD PICKOVER
Author, Sex, Drugs, Einstein, and Elves
We are all virtual

Our desire for entertaining virtual realities is increasing.  As our
understanding of the human brain also accelerates, we will create both
imagined realities and a set of memories to support these
simulacrums.  For example, someday it will be possible to simulate
your visit to the Middle Ages and, to make the experience realistic,
we may wish to ensure that you believe yourself to actually be in the
Middle Ages. False memories may be implanted, temporarily overriding
your real memories. This should be easy to do in the future -- given
that we can already coax the mind to create richly detailed virtual
worlds filled with ornate palaces and strange beings through the use
of the drug DMT (dimethyltryptamine).  In other words, the brains of
people who take DMT appear to access a treasure chest of images and
experience that typically include jeweled cities and temples, angelic
beings, feline shapes, serpents, and shiny metals. When we understand
the brain better, we will be able to safely generate more controlled
visions.
Our brains are also capable of simulating complex worlds when we
dream.  For example, after I watched a movie about people on a coastal
town during the time of the Renaissance, I was "transported" there
later that night while in a dream. The mental simulation of the
Renaissance did not have to be perfect, and I'm sure that there were
myriad flaws.  However, during that dream I believed I was in the
Renaissance.

If we understood the nature of how the mind induces the conviction of
reality, even when strange, nonphysical events happen in the dreams,
we could use this knowledge to ensure that your simulated trip to the
Middle Ages seemed utterly real, even if the simulation was imperfect.
It will be easy to create seemingly realistic virtual realities
because we don't have to be perfect or even good with respect to the
accuracy of our simulations in order to make them seem real.  After
all, our nightly dreams usually seem quite real even if upon awakening
we realize that logical or structural inconsistencies existed in the
dream.
In the future, for each of your own real lives, you will personally
create ten simulated lives. Your day job is a computer programmer for
IBM. However, after work, you'll be a knight with shining armor in the
Middle Ages, attending lavish banquets, and smiling at wandering
minstrels and beautiful princesses. The next night, you'll be in the
Renaissance, living in your home on the Amalfi coast of Italy,
enjoying a dinner of plover, pigeon, and heron.
If this ratio of one real life to ten simulated lives turned out to be
representative of human experience, it would mean that right now you
have only about a one in eleven chance of being alive on the actual
date of today.
_________________________________________________________________

JOHN ALLEN PAULOS
Professor of Mathematics, Temple University, Philadelphia; Author, A
Mathematician Plays the Stock Market
The self is a conceptual chimera

Doubt that a supernatural being exists is banal, but the more radical
doubt that we exist, at least as anything more than nominal,
marginally integrated entities having convenient labels like "Myrtle"
and "Oscar," is my candidate for Dangerous Idea. This is, of course,
Hume's idea -- and Buddha's as well -- that the self is an
ever-changing collection of beliefs, perceptions, and attitudes, that
it is not an essential and persistent entity, but rather a conceptual
chimera. If this belief ever became widely and viscerally felt
throughout a society -- whether because of advances in neurobiology,
cognitive science, philosophical insights, or whatever -- its effects
on that society would be incalculable. (Or so this assemblage of
beliefs, perceptions, and attitudes sometimes thinks.)
_________________________________________________________________

JAMES O'DONNELL
Classicist; Cultural Historian; Provost, Georgetown University;
Author, Avatars of the Word

Marx was right: the "state" will evaporate and cease to have useful
meaning as a form of human organization

From the earliest Babylonian and Chinese moments of "civilization", we
have agreed that human affairs depend on an organizing power in the
hands of a few people (usually with religious charisma to undergird
their authority) who reside in a functionally central location.
"Political science" assumes in its etymology the "polis" or city-state
of Greece as the model for community and government.

But it is remarkable how little of human excellence and achievement
has ever taken place in capital cities and around those elites, whose
cultural history is one of self-mockery and implicit acceptance of the
marginalization of the powerful. Borderlands and frontiers (and even
suburbs) are where the action is.

But as long as technologies of transportation and military force
emphasized geographic centralization and concentration of forces, the
general or emperor or president in his capital with armies at his beck
and call was the most obvious focus of power. Enlightened government
constructed mechanisms to restrain and channel such centralized
authority, but did not effectively challenge it.

So what advantage is there today to the nation state? Boundaries
between states enshrine and exacerbate inequalities and prevent the
free movement of peoples. Large and prosperous state and state-related
organizations and locations attract the envy and hostility of others
and are sitting duck targets for terrorist action. Technologies of
communication and transportation now make geographically-defined
communities increasingly irrelevant and provide the new elites and new
entrepreneurs with ample opportunity to stand outside them. Economies
construct themselves in spite of state management and money flees
taxation as relentlessly as water follows gravity.

Who will undergo the greatest destabilization as the state evaporates
and its artificial protections and obstacles disappear? The sooner it
happens, the more likely it is to be the United States. The longer it
takes ... well, perhaps the new Chinese empire isn't quite the
landscape-dominating leviathan of the future that it wants to be.
Perhaps in the end it will be Mao who was right, and a hundred flowers
will bloom there.
_________________________________________________________________

PHILIP ZIMBARDO
Professor Emeritus of Psychology at Stanford University; Author:
Shyness

The banality of evil is matched by the banality of heroism

Those people who become perpetrators of evil deeds and those who
become perpetrators of heroic deeds are basically alike in being just
ordinary, average people.

The banality of evil is matched by the banality of heroism. Neither is
the consequence of dispositional tendencies or of special inner
attributes of pathology or goodness residing within the human psyche
or the human genome. Both emerge in particular situations at
particular times when situational forces play a compelling role in
moving individuals across the decisional line from inaction to action.

There is a decisive decisional moment when the individual is caught up
in a vector of forces emanating from the behavioral context. Those
forces combine to increase the probability of acting to harm others or
acting to help others. That decision may not be consciously planned or
taken mindfully, but impulsively driven by strong situational forces
external to the person. Among those action vectors are group pressures
and group identity, diffusion of responsibility, and a temporal focus
on the immediate moment without entertaining future costs and
benefits.

The military police guards who abused prisoners at Abu Ghraib and the
prison guards in my Stanford Prison experiment who abused their
prisoners illustrate the "Lord of the Flies" temporary transition of
ordinary individuals into perpetrators of evil. We set aside those
whose evil behavior is enduring and extensive, such as tyrants like
Idi Amin, Stalin and Hitler. Heroes of the moment are also contrasted
with lifetime heroes.

The heroic actions of Rosa Parks on a Southern bus, of Joe Darby in
exposing the Abu Ghraib tortures, and of NYC firefighters at the World
Trade Center disaster were acts of bravery at that time and place.
The heroism of Mother Teresa, Nelson Mandela, and Gandhi is replete
with valorous acts repeated over a lifetime. That chronic heroism is
to acute heroism as valour is to bravery.

This view implies that any of us could as easily become heroes as
perpetrators of evil depending on how we are impacted by situational
forces. We then want to discover how to limit, constrain, and prevent
those situational and systemic forces that propel some of us toward
social pathology.

It is equally important for our society to foster the heroic
imagination in our citizens by conveying the message that anyone is a
hero-in-waiting who will be counted upon to do the right thing when
the time comes to make the heroic decision to act to help or to act to
prevent harm.
_________________________________________________________________

RICHARD FOREMAN
Founder & Director, Ontological-Hysteric Theater
Radicalized relativity

In my area of the arts and humanities, the most dangerous idea (and
the one under whose influence I have operated throughout my artistic
life) is the complete relativity of all positions and styles of
procedure. The notion that there are no "absolutes" in art -- and in
the modern era, each valuable effort has been, in one way or another,
the highlighting and glorification of elements previously "off limits"
and rejected by the previous "classical" style.

Such a continual "reversal of values" has of course delivered us into
the current post-post modern era, in which fragmentation, surface
value and the complex weave of "sampling procedure" dominate, and "the
center does not hold".

I realize that my own artistic efforts have, in a small way,
contributed to the current aesthetic/emotional environment in which
the potential spiritual depth and complexity of evolved human
consciousness is trumped by the bedazzling shuffle of the shards of
inherited elements -- never before as available to the collective
consciousness. The resultant orientation towards "cultural relativity"
in the arts certainly comes in part from the psychic re-orientation
resulting from Einstein's bombshell dropped at the beginning of the
last century.

This current "relativity" of all artistic, philosophical, and
psychological values leaves the culture adrift, and yet there is no
"going back" in spite of what conservative thinkers often recommend.

At the very moment of our cultural origin, we were warned against
"eating from the tree of knowledge". Down through subsequent history,
one thing has led to another, until now -- here we are, sinking into
the quicksand of the ever-accelerating reversal of each latest value
(or artistic style). And yet -- there are many artists, like myself,
committed to the belief that -- having been "thrown by history" into
the dangerous trajectory initiated by the inaugural "eating from the
tree of knowledge" (a perhaps "fatal curiosity" programmed into our
genes) -- the only escape possible is to treat the quicksand of the
present as a metaphorical "black hole" through which we must pass --
indeed risking psychic destruction (or "banalization") -- for the
promise of emerging re-made, in new still unimaginable form, on the
other side.

This is the "heroic wager" the serious "experimental" artist makes in
living through the dangerous idea of radicalized relativity. It is
ironic, of course, that many of our greatest scientists (not all of
course) have little patience for the adventurous art of our times
(post Stockhausen/Boulez music, post Joyce/ Mallarme literature) and
seem to believe that a return to a safer "audience friendly" classical
style is the only responsible method for today's artists.

Do they perhaps feel psychologically threatened by advanced styles
that supersede previous principles of coherence? They are right to
feel threatened by such dangerous advances into territory for which
conscious sensibility is not yet fully prepared. Yet it is time for
all serious minds to "bite the bullet" of such forays into the unknown
world in which the dangerous quest for deeper knowledge leads
scientist and artist alike.
_________________________________________________________________

JOHN GOTTMAN
Psychologist; Founder of Gottman Institute; Author, The Mathematics of
Marriage

Emotional intelligence

The most dangerous idea I know of is emotional intelligence. Within
the context of the cognitive neuroscience revolution in psychology,
the focus on emotions is extraordinary. The over-arching ideas that
there is such a thing as emotional intelligence, that it has a
neuroscience, and that it is inter-personal, i.e., between two brains
rather than within one brain, are all quite revolutionary concepts
about human psychology. I could go on. It is also a revolution in
thinking about infancy, couples, family, adult development, aging,
etc.
_________________________________________________________________

PIET HUT
Professor of Astrophysics, Institute for Advanced Study, Princeton

A radical reevaluation of the character of time

Copernicus and Darwin took away our traditional place in the world and
our traditional identity in the world. What traditional trait will be
taken away from us next? My guess is that it will be the world itself.

We see the first few steps in that direction in the physics,
mathematics and computer science of the twentieth century, from
quantum mechanics to the results obtained by Gödel, Turing and others.
The ontologies of our worlds, concrete as well as abstract, have
already started to melt away.

The problem is that quantum entanglement and logical incompleteness
lack the in-your-face quality of a spinning earth and our kinship with
apes. We will have to wait for the ontology of the traditional world
to unravel further, before the avant-garde insights will turn into a
real revolution.

Copernicus upset the moral order, by dissolving the strict distinction
between heaven and earth. Darwin did the same, by dissolving the
strict distinction between humans and other animals. Could the next
step be the dissolution of the strict distinction between reality and
fiction?

For this to be shocking, it has to come in a scientifically
respectable way, as a very precise and inescapable conclusion -- it
should have the technical strength of a body of knowledge like quantum
mechanics, as opposed to collections of opinions on the level of
cultural relativism.

Perhaps a radical reevaluation of the character of time will do it. In
everyday experience, time flows, and we flow with it. In classical
physics, time is frozen as part of a frozen spacetime picture. And
there is, as yet, no agreed-upon interpretation of time in quantum
mechanics.

What if a future scientific understanding of time would show all
previous pictures to be wrong, and demonstrate that past and future
and even the present do not exist? That stories woven around our
individual personal history and future are all just wrong? Now that
would be a dangerous idea.
_________________________________________________________________

DAN SPERBER
Social and cognitive scientist, CNRS, Paris; author, Explaining
Culture

Culture is natural

A number of us -- biologists, cognitive scientists, anthropologists or
philosophers -- have been trying to lay down the foundations for a
truly naturalistic approach to culture. Sociobiologists and cultural
ecologists have explored the idea that cultural behaviors are
biological adaptations to be explained in terms of natural selection.
Memeticists inspired by Richard Dawkins argue that cultural evolution
is an autonomous Darwinian selection process merely enabled but not
governed by biological evolution.

Evolutionary psychologists, Cavalli-Sforza, Feldman, Boyd and
Richerson, and I are among those who, in different ways, argue for
more complex interactions between biology and culture. These
naturalistic approaches have been received not just with intellectual
objections, but also with moral and political outrage: this is a
dangerous idea, to be strenuously resisted, for it threatens
humanistic values and sound social sciences.

When I am called a "reductionist", I take it as a misplaced
compliment: a genuine reduction is a great scientific achievement,
but, too bad, the naturalistic study of culture I advocate does not
reduce to that of biology or of psychology. When I am called a
"positivist" (an insult among postmodernists), I acknowledge without
any sense of guilt or inadequacy that indeed I don't believe that all
facts are socially constructed. On the whole, having one's ideas
described as "dangerous" is flattering.

Dangerous ideas are potentially important. Braving insults and
misrepresentations in defending these ideas is noble. Many advocates
of naturalistic approaches to culture see themselves as a group of
free-thinking, deep-probing scholars besieged by bigots.

But wait a minute! Naturalistic approaches can be dangerous: after
all, they have been. The use of biological evidence and arguments
purported to show that there are profound natural inequalities among
human "races", ethnic groups, or between women and men is only too
well represented in the history of our disciplines. It is not good
enough for us to point out (rightly) that 1) the science involved is
bad science, 2) even if some natural inequality were established, it
would not come near justifying any inequality in rights, and
3) postmodernists
criticizing naturalism on political grounds should begin by rejecting
Heidegger and other reactionaries in their pantheon who also have been
accomplices of policies of discrimination. This is not enough because
the racist and sexist uses of naturalism are not exactly unfortunate
accidents.

Species evolve because of genetic differences among their members;
therefore you cannot leave biological difference out of a biological
approach. Luckily, it so happens that biological differences among
humans are minor and don't produce sub-species or "races," and that
human sexual dimorphism is relatively limited. In particular, all
humans have mind/brains made up of the same mechanisms, with just
fine-tuning differences. (Think how very different all this would be
if -- however improbably -- Neanderthals had survived and developed
culturally as we did, so that there really were different human
"races").

Given what anthropologists have long called "the psychic unity of
humankind", the fundamental goal for a naturalistic approach is to
explain how a common human nature -- and not biological differences
among humans -- gives rise to such a diversity of languages, cultures,
social organizations. Given the real and present danger of distortion
and exploitation, it must be part of our agenda to take responsibility
for the way this approach is understood by a wider public.

This, happily, has been done by a number of outstanding authors
capable of explaining serious science to lay audiences, and who
typically have made the effort of warning their readers against
misuses of biology. So is the danger averted, and can we just move
on? No, we are not there yet, because the very necessity of
popularizing the naturalistic approach and the very talent with which
this is being done create a new danger: that of arrogance.

We naturalists do have radical objections to what Leda Cosmides and
John Tooby have called the "Standard Social Science Model." We have
many insightful hypotheses and even some relevant data. The truth of
the matter, however, is that naturalistic approaches to culture have so
far remained speculative, hardly beginning to throw light on just
fragments of the extraordinarily wide range of detailed evidence
accumulated by historians, anthropologists, sociologists and others.
Many of those who find our ideas dangerous fear what they see as an
imperialistic bid to take over their domain.

The bid would be unrealistic, and so is the fear. The real risk is
different. The social sciences host a variety of approaches, which,
with a few high profile exceptions, all contribute to our
understanding of the domain. Even if it involves some reshuffling, a
naturalistic approach should be seen as a particularly welcome and
important addition. But naturalists full of grand claims and promises,
yet with little interest in the competence accumulated by others, are,
if not exactly dangerous, at least much less useful than they should
be, and the deeper challenge they present to social scientists' mental
habits is less likely to be properly met.
_________________________________________________________________

MARTIN E.P. SELIGMAN
Psychologist, University of Pennsylvania, Author, Authentic Happiness

Relativism

In looking back over the scientific and artistic breakthroughs in the
20th century, there is a view that the great minds relativized the
absolute. Did this go too far? Has relativism gotten to a point that
it is dangerous to the scientific enterprise and to human well being?
The most visible person to say this is none other than Pope Benedict
XVI in his denunciations of the "dictatorship of the relative." But
worries about relativism are not only a matter of dispute in theology;
there are parallel dissenters from the relative in science, in
philosophy, in ethics, in mathematics, in anthropology, in sociology,
in the humanities, in childrearing, and in evolutionary biology.

Here are some of the domains in which serious thinkers have worried
about the overdoing of relativism:

o In philosophy of science, there is ongoing tension between the
Kuhnians (science is about "paradigms," the fashions of the current
discipline) and the realists (science is about finding the truth).
o In epistemology, there is the dispute between the Tarskian
correspondence theorists ("p" is true if p) and two relativistic
camps: the coherence theorists ("p" is true to the extent it
coheres with what you already believe is true) and the pragmatic
theorists ("p" is true if it gets you where you want to go).
o At the ethics/science interface, there is the fact/value dispute:
the contention that science must and should incorporate the values of
the culture in which it arises versus the contention that science is
and should be value-free.
o In mathematics, Gödel's incompleteness proof was widely
interpreted as showing that mathematics is relative; but Gödel, a
Platonist, intended the proof to support the view that there are
statements that cannot be proved within the system but are true
nevertheless. Einstein, similarly, believed that the theory of
relativity was misconstrued in just the same way by the "man is the
measure of all things" relativists.
o In the sociology of high accomplishment, Charles Murray (Human
Accomplishment) documents that the highest accomplishments occur in
cultures that believe in absolute truth, beauty, and goodness. The
accomplishments of cultures that do not believe in absolute beauty,
he contends, tend to be ugly; those of cultures that do not believe
in absolute goodness tend to be immoral; and those of cultures that
do not believe in absolute truth tend to be false.
o In anthropology, pre-Boasians believed that cultures were
hierarchically ordered into savage, barbarian, and civilized,
whereas much of modern anthropology holds that all social forms are
equal. This is the intellectual basis of the sweeping cultural
relativism that dominates the humanities in academia.
o In evolution, Robert Wright (like Aristotle) argues for a scala
naturae, with the direction of evolution favoring complexity by its
invisible hand; whereas Stephen Jay Gould argued that the fern is
just as highly evolved as Homo sapiens. Does evolution have an
absolute direction and are humans further along that trajectory
than ferns?
o In child-rearing, much of twentieth-century education was
profoundly influenced by the "Summerhillians," who argued that complete
freedom produced the best children, whereas other schools of
parenting, education, and therapy argue for disciplined,
authoritative guidance.
o Even in literature, arguments over what should go into the canon
revolve around the absolute-relative controversy.
o Ethical relativism and its opponents are all-too-obvious
instances of this issue.

I do not know if the dilemmas in these domains are only metaphorically
parallel to one another. I do not know whether illumination in one
domain will illuminate the others. But it might, and it is just
possible that the great minds of the twenty-first century will
absolutize the relative.
_________________________________________________________________

HOWARD GARDNER
Psychologist, Harvard University; Author, Changing Minds

Following Sisyphus, not Pandora

According to myth, Pandora unleashed all evils upon the world; only
hope remained inside the box. Hope for human survival and progress
rests on two assumptions: (1) Human constructive tendencies can
counter human destructive tendencies, and (2) Human beings can act on
the basis of long-term considerations, rather than merely short-term
needs and desires. My personal optimism, and my years of research on
"good work", could not be sustained without these assumptions.
Yet I lay awake at night with the dangerous thought that pessimists
may be right. For the first time in history -- as far as we know! --
we humans live in a world that we could completely destroy. The human
destructive tendencies described in the past by Thomas Hobbes and
Sigmund Freud, the "realist" picture of human beings embraced more
recently by many sociobiologists, evolutionary psychologists, and game
theorists might be correct; these tendencies could overwhelm any
proclivities toward altruism, protection of the environment, control
of weapons of destruction, progress in human relations, or seeking to
become good ancestors. As one vivid data point: there are few signs
that the unprecedented power possessed by the United States is being
harnessed to positive ends.

Strictly speaking, what will happen to the species or the planet is
not a question for scientific study or prediction. It is a question of
probabilities, based on historical and cultural considerations, as
well as our most accurate description of human nature(s). Yet, science
(as reflected, for example, in contributions to Edge discussions) has
recently invaded this territory with its assertions of a
biologically-based human moral sense. Those who assert a human moral
sense are wagering that, in the end, human beings will do the right
thing. Of course, human beings have the capacities to make moral
judgments -- that is a mere truism. But my dangerous thought is that
this moral sense is up for grabs -- that it can be mobilized for
destructive ends (one society's terrorist is another society's freedom
fighter) or overwhelmed by other senses and other motivations, such as
the quest for power, instant gratification, or annihilation of one's
enemies.

I will continue to do what I can to encourage good work -- in that
sense, Pandoran hope remains. But I will not look to science,
technology, or religion to preserve life. Instead, I will follow
Albert Camus' injunction, in his portrayal of another mythic figure
endlessly attempting to push a rock up a hill: one should imagine
Sisyphus happy.

