[Paleopsych] Edge Annual Question 2002: What Is Your Question? ... Why?

Premise Checker checker at panix.com
Tue Jan 17 18:48:29 UTC 2006


Edge Annual Question 2002: What Is Your Question? ... Why?
http://www.edge.org/q2002/question.02_print.html [links omitted]

"I can repeat the question, but am I bright enought to ask it?"
________________________________________________________________

The 5th Annual Edge Question reflects the spirit of the Edge motto:
"To arrive at the edge of the world's knowledge, seek out the most
complex and sophisticated minds, put them in a room together, and have
them ask each other the questions they are asking themselves."

The 2002 Edge Question is:

"WHAT IS YOUR QUESTION? ... WHY?"

I have asked Edge contributors for "hard-edge" questions, derived from
empirical results or experience specific to their expertise, that
render visible the deeper meanings of our lives, redefine who and what
we are. The goal is a series of interrogatives in which "thinking
smart prevails over the anaesthesiology of wisdom."

Happy New Year!

John Brockman
Publisher & Editor
[1.14.02] 
________________________________________________________________

Responses (in order received): Kevin Kelly o Paul Davies o Stuart A.
Kauffman o Alison Gopnik o John Horgan o Daniel C. Dennett o Derrick
de Kerckhove o Clifford A. Pickover o John McCarthy o Douglas Rushkoff
o William Calvin o Timothy Taylor o Marc D. Hauser o Roger Schank o
James J. O'Donnell o Robert Aunger o Lawrence Krauss o Jaron Lanier o
Freeman Dyson o Lance Knobel o Robert Sapolsky o Mark Stahlman o Andy
Clark o Sylvia Paull o Todd Feinberg, MD o Nicholas Humphrey o
Terrence Sejnowski o Howard Lee Morgan o Judith Rich Harris o Martin
Rees o Paul Bloom o Margaret Wertheim o George Dyson o Todd Siler o
Chris Anderson o Gerd Stern o Alan Alda o Henry Warwick o Delta Willis
o John Skoyles o Paul Davies o Piet Hut o Julian Barbour o Antony
Valentini o Stephen Grossberg o Rodney Brooks o Karl Sabbagh o David
G. Myers o John D. Barrow o Milford H. Wolpoff o Richard Dawkins o
David Deutsch o Joel Garreau o Gregory Benford o Eduardo Punset o Gary
F. Marcus o Steve Grand o Seth Lloyd o John Markoff o Michael Shermer
o Jordan B. Pollack o Steven R. Quartz o David Gelernter o Samuel
Barondes o Steven Pinker o Frank Schirrmacher o Leon Lederman o Howard
Gardner o Esther Dyson o Keith Devlin o Richard Nisbett o Stephen
Schneider o Robert Provine o Sir John Maddox o Carlo Rovelli o Tor
Nørretranders o David Buss o John Allen Paulos o Dan Sperber o W.
Daniel Hillis o Brian Eno o Anton Zeilinger o Eberhard Zangger o Mark
Hurst o Stuart Pimm o James Gilligan o Brian Greene o Rafael Núñez o
J. Doyne Farmer o Ray Kurzweil o Randolph Nesse o Adrian Scott o Tracy
Quan o Xeni Jardin o Stanislas Dehaene o Paul Ewald o George Lakoff o
David Berreby o Jared Diamond
________________________________________________________________

New: Thinkers of the "Third Culture": Questions for the Year 2002: "Wer
Nicht Fragt, Bleibt Dumm"
THOSE WHO DON'T ASK REMAIN DUMB
The haze of ignorance still has not disappeared: Whoever wants real
answers has to know what he's looking for -- A poll of scientists and
artists for the year 2002.

In a time when culture was still not numbered, the Count of Thüringen
invited his nobles to the "Singers' War at the Wartburg," where he
asked questions (if we are to believe Richard Wagner) that would bring
glory, the most famous of which queried, "Could you explain to me the
nature of love?" The publisher and literary agent, John Brockman, who
now organizes singers' wars on the Internet, enjoys latching on to
this tradition at the beginning of every year. (FAZ, January 9, 2001).
His Tannhäuser may be named Steven Pinker, and his Wolfram von
Eschenbach may go by Richard Dawkins, but it would do us well to trust
that they and their compatriots could also turn out speculation on the
count's favorite theme. Brockman's thinkers of the "Third Culture,"
whether they, like Dawkins, study evolutionary biology at Oxford or,
like Alan Alda, portray scientists on Broadway, know no taboos.
Everything is permitted, and nothing is excluded from this
intellectual game. But in the end, as it takes place in its own
Wartburg, reached electronically at www.edge.org, it concerns us and
our unexplained and evidently inexplicable fate. In this new year
Brockman himself doesn't ask, but rather once again facilitates the
asking of questions. The contributions can be found from today onwards
on the Internet. In conjunction with the start of the forum we are
printing a selection of questions and commentary, at times in somewhat
abridged form, in German translation. ...

F.A.Z. --Frankfurter Allgemeine Zeitung, 14.01.2002, Nr. 11 / Seite 38
________________________________________________________________

99 contributors
59,000 words
In order received
________________________________________________________________

"What is your heresy?"
I've noticed that the more scientifically educated a person is, the
more likely they will harbor a quiet heresy. This is a strongly held
belief that goes against the grain of their peers, something not in
the accepted canon of their friends and colleagues. Often the person
finds it difficult to fully justify their own belief. It may or may
not be believed by others outside their circle; that doesn't matter.
What is important is that this view is not held by people they respect
and admire. It's become almost a game for me to uncover a person's
heresy because I've found that this unconventional view -- held with
much effort against the tide of their peers' views -- tells me more
about them than does the bulk of their well-thought-out,
well-reasoned, and well-argued conventional views. The more unexpected
the belief is, the more I like them.

Kevin Kelly is Editor-At-Large for Wired Magazine and author of New
Rules for the New Economy.
________________________________________________________________

"Universe or multiverse, that is the question?"
Of late, it is fashionable among leading physicists and cosmologists
to suppose that alongside the physical world we see lies a stupendous
array of alternative realities, some resembling our universe, others
very different. The multiverse theory comes in several varieties, but
in the most ambitious the "other universes" have different physical
laws. Only in a tiny fraction of universes will the laws come out just
right, by pure accident, for conscious beings such as ourselves to
emerge and marvel at how bio-friendly their world appears.

The multiverse has replaced God as an explanation for the appearance
of design in the structure of the physical world. Like God, the agency
concerned lies beyond direct observation, inferred by inductive
reasoning from the properties of the one universe we do see.

The meta-question is, does the existence of these other universes
amount to more than an intellectual exercise? Can we ever discover
that the hypothesized alternative universes are really there? If not,
is the multiverse not simply theology dressed up in techno jargon? And
finally, could there be a Third Way, in which the ingenious features
of the universe are explained neither by an Infinite Designer Mind,
nor by an Infinite Invisible Multiverse, but by an entirely new
principle of explanation?

Paul Davies, a physicist, writer and broadcaster, now based in South
Australia, is author of How to Build a Time Machine.
________________________________________________________________

"What must a physical system be to be able to act on its own behalf?"
In our ordinary life, we ascribe action and doing to other humans and to
lower organisms, even to bacteria swimming up a glucose gradient to get
food. Yet physics has no "doings," only happenings, and the bacterium
is just a physical system. I have struggled with the question "What
must a physical system be to be able to act on its own behalf?" Call
such a system an autonomous agent. I may have found an answer: such
systems must be able to replicate and do a thermodynamic work cycle.
But of course I'm not sure of my answer. I am sure the question is of
fundamental importance, for all free-living organisms are autonomous
agents, and with them doings, not just happenings, enter the
universe. We do manipulate the universe on our own behalf. Is there a
better definition of autonomous agents? And what does their existence
mean for science, particularly physics?

Stuart A. Kauffman, an emeritus professor of biochemistry at UPenn, is
a theoretical biologist and author of Investigations.
________________________________________________________________

"Why do we ask questions?" 
We all take for granted the fact that human beings ask questions and
seek explanations, and that the questions they ask go far beyond their
immediate practical concerns. But this insatiable human curiosity is
actually quite puzzling. No other animal devotes as much time, energy
and brain area to the pursuit of knowledge for its own sake. Why? Is
this drive for explanation restricted to the sophisticated
professional questioners on this site? Or is it a deeper part of human
nature?

Developmental research suggests that this drive for explanation is, in
fact, in place very early in human life. We've all experienced the
endless "whys?" of three-year-olds and the downright dangerous
two-year-old determination to seek out strange new worlds and boldly
go where no toddler has gone before. More careful analyses and
experiments show that children's questions and explorations are
strategically designed, in quite clever ways, to get the right kind of
answers. In the case of human beings, evolution seems to have
discovered that it's cost-effective to support basic research, instead
of just funding directed applications. Human children are equipped
with extremely powerful learning mechanisms, and a strong intrinsic
drive to seek explanations. Moreover, they come with a support staff
-- parents and other caregivers -- who provide both lunch and
references to the results of previous generations of human
researchers.

But this preliminary answer prompts yet more questions. Why is it that
in adult life, the same quest for explanatory truth so often seems to
be satisfied by the falsehoods of superstition and religion? (Maybe we
should think of these institutions as the cognitive equivalent of fast
food. Fast food gives us the satisfying tastes of fat and sugar that
were once evolutionary markers of good food sources, without the
nourishment. Religion gives us the illusion of regularity and order,
evolutionary markers of truth, without the substance.)

Why does this intrinsic truth-seeking drive seem to vanish so
dramatically when children get to school? And, most important, how is
it possible for children to get the right answers to so many questions
so quickly? What are the mechanisms that allow human children to be
the best learners in the known universe? Answering this question would
not only tell us something crucial about human nature, it might give
us new technologies that would allow even dumb adults to get better
answers to our own questions.

Alison Gopnik is a professor of psychology at the University of
California at Berkeley and coauthor of The Scientist In The Crib.
________________________________________________________________

"Do we want the God machine?"
The God machine is the name that journalists have given to a device
invented by the Canadian psychologist Michael Persinger. It consists
of a bunch of solenoids that, when strapped around the head, deliver
pulses of electromagnetic radiation to specific regions of the brain.
Persinger claims he can induce mystical visions by stimulating the
temporal lobes, which have also been linked to religious experiences
by other scientists, notably V.S. Ramachandran of the University of
California at San Diego.

Persinger's machine is actually quite crude. It induces peculiar
perceptual distortions but no classic mystical experiences. But what
if, through further advances in neuroscience and other fields,
scientists invent a God machine that actually works, that delivers
satori, nirvana, to anyone on command, without any negative side
effects? It doesn't have to be an electromagnetic brain-stimulating
device. It could be a drug, a type of brain surgery, a genetic
modification, or some combination thereof.

One psychedelic researcher recently suggested to me that enlightenment
could be spread around the world by an infectious virus that boosts
the brain's production of dimethyltryptamine, an endogenous psychedelic
that the Nobel laureate Julius Axelrod of the National Institutes of
Health detected in trace amounts in human brain tissue in 1972. But
whatever form the God machine takes, it would be powerful enough to
transform the world into what Robert Thurman, an authority on Tibetan
Buddhism (and father of Uma), calls the "Buddhaverse," a mystical
utopia in which everyone is enlightened.

The obvious follow-up question: Would the invention of a genuine God
machine spell our salvation or doom?

John Horgan is a freelance writer and author of The Undiscovered Mind.
________________________________________________________________

"What kind of system of 'coding' of semantic information does the
brain use?"
My question now is actually a version of the question I was asking
myself in the first year, and I must confess that I've had very little
time to address it properly in the intervening years, since I've been
preoccupied with other, more tractable issues. I've been mulling it
over in the back of my mind, though, and I do hope to return to it in
earnest in 2002.

What kind of system of "coding" of semantic information does the brain
use? We have many tantalizing clues but no established model that
comes close to exhibiting the molar behavior that is apparently being
seen in the brain. In particular, we see plenty of evidence of a
degree of semantic localization -- neural assemblies over here are
involved in cognition about faces and neural assemblies over there are
involved in cognition about tools or artifacts, etc. -- and yet we also
have evidence (unless we are misinterpreting it) that shows the
importance of "spreading activation," in which neighboring regions are
somehow enlisted to assist with currently active cognitive projects.
But how could a region that specializes in, say, faces contribute at
all to a task involving, say, food, or transportation or . . . . ? Do
neurons have two (or more) modes of operation -- specialized, "home
territory" mode, in which their topic plays a key role, and
generalized, "helping hand" mode, in which they work on other regions'
topics?

Alternatively, is the semantic specialization we have observed an
illusion -- are these regions only circumstantially implicated in
these characteristic topics because of some as-yet-unanalyzed
generalized but idiosyncratic competence that happens to be invoked
usually when those topics are at issue? (The mathematician's phone
rings whenever the topic is budgets, but he knows nothing about money;
he's just good at arithmetic.) Or, to consider another alternative, is
"spreading activation" mainly just noisy leakage, playing no
contributing role in the transformation of content? Or is it just
"political" support, contributing no content but helping to keep
competing projects suppressed for a while? And finally, the properly
philosophical question: what's wrong with these questions and what
would better questions be?

Daniel C. Dennett is Distinguished Arts and Sciences Professor at
Tufts University and author of Darwin's Dangerous Idea.
________________________________________________________________

"'To be or not to be' remains the question"
The fact is that "To be or not to be" is both a simple question, perhaps the
simplest, and a complex one, the hardest to sustain, let alone to
ask. I ask it myself often -- maybe as often as five or six times a
week -- and it is the asking, not any hope for an answer, that yields
the most searing and immediate insight. I don't get it right every
time, but when I do, I am thrown for a split second at the other side
of being, the place where it begins.

But I can never retain that amazing feeling for long. What is required
is a kind of radical pull-back of oneself from the most banal evidence
of life and reality. Jean-Paul Sartre, after Shakespeare, was probably
the thinker who framed the question best in his novels and
philosophical treatises. The issue, however, is that this question is
profoundly existential, not merely philosophical. It can and should be
asked by any living, thinking, sentient being, but it cannot be
answered.

There is huge energy and cognitive release to expect from it when it
is properly framed. You have to somehow imagine that everything,
absolutely everything has disappeared, or never was, that you have
just happened upon your own circumstances by accident, the first
accident of being. Another approach is to imagine sharply that
anything that is, is a result of a warp, a blip in nothingness. It is
not even a matter of finding out why or how; those demands are already
far too elaborate. It is a crude, raw, brutal question followed by
absolute, lightning-speed amazement. And then the ordinary
familiarity of all things known and named takes over, slipping your
whole being into the stream of life, of being, with its attendant
problems and felicities. I feel strongly that there is a fundamental
need for Shakespeare's question in everyday life, but that is not
what you and I were taught in school.

Derrick de Kerckhove is Director of the McLuhan Program at the
University of Toronto and author of Connected Intelligence.
________________________________________________________________

"Would you choose universe Omega or Upsilon?"
Consider two universes. Universe Omega is a universe in which God does
not exist, but the inhabitants of the universe believe God exists.
Universe Upsilon is a universe in which God does exist, but no
inhabitant believes God exists. In which universe would you prefer to
live? In which universe do you think most people would prefer to live?
I recently posed this question to scientists, philosophers, and lay
people. Some respondents suggested that if people think God exists,
then God is sufficiently "real." A few individuals suggested that
people would behave more humanely in a Universe where people believed
in God. Yet others countered that an ethical system dependent on faith
in a watchful, omniscient, or vengeful God is fragile and prone to
collapse when doubt begins to undermine faith. A fuller listing of
responses is in the book.

To me, the biggest challenge to answering this question is
understanding what is meant by "God." Scientists sometimes think of
God as the God of mathematical and physical laws and the underpinnings
of the universe. Other people believe in a God who intervenes in our
affairs, turns water into wine, answers prayers, and smites the
wicked. The Koran implies that God lives outside of time, and, thus,
our brains are not up to the task of understanding Him. Some
theologians have suggested that only especially sensitive individuals
can glimpse God, but we ordinary folk shouldn't deny His existence in
the same way that a blind man shouldn't deny the existence of a
rainbow. In modern times, many scientists ponder the amazing panoply
of chemical and physical constants that control the expansion of the
universe and seem tuned to permit the formation of stars and the
synthesis of carbon-based life.

Questions about God's omniscience are particularly mind-numbing, yet
we can still ask if it is rational to believe in an omniscient God. As
Steven J. Brams points out in his book Superior Beings, "The
rationality of theistic belief is separate from its truth -- a belief
need not be true or even verifiable to be rational." However, if we
posit the existence of an omniscient God, His omniscience may require
Him to know the history of all quarks in the universe, the states of
all electrons, the vibrations of every string, and the ripples of the
quantum foam. Is this the same God who, in Exodus 21, gave Moses laws
describing when one should stone an ox to death? Is the God of Gluons
and Galaxies the same God concerned with Israeli oxen dung?

But what about the Bible itself? Today, the Bible -- especially the
Old Testament -- may serve as an alternate reality device. It gives
its readers a glimpse of other ways of thinking and of other worlds.
It is also the most mysterious book ever written. We don't know the
ratio of myth to history. We don't know all the authors. We are not
always sure of the intended message. We don't fully understand the Old
Testament's Nephilim or its Bridegroom of Blood. We only know that
the Bible reflects some of humankind's most ancient and deep
feelings. For some unknown reason, it is a bell that has resonated
through the centuries. It lets us reach across cultures, see visions,
and better understand what we have held sacred. Because the Bible is a
hammer that shatters the ice of our unconscious, it thus provides one
of many mechanisms in our quest for transcendence.

Clifford A. Pickover is a researcher at IBM's T. J. Watson Research
Center and author of The Paradox of God and the Science of
Omniscience.
________________________________________________________________

"How are behaviors encoded in DNA?"
Many animals have quite substantial hereditary behavior.
Moreover, these behaviors are subject to evolution on fairly short
time scales, so they probably have straightforward DNA encodings on
which mutations can act. Mostly the behaviors seem to be sequences of
actions, but perhaps there are some of the form "do X until Y is
true".

John McCarthy is Professor of Computer Science at Stanford University.
________________________________________________________________

"Why do we tell stories?"
Why a story?
Human beings can't help but understand their world in terms of
narratives. Although the theory of evolution effectively dismantled
our creationist myths over a century ago, most thinking humans still
harbor an attachment to the notion that we were put here, with
purpose, by something. New understandings of emergence, as well as new
tools for perceiving the order underlying chaos, seem to hold the
promise of a wide-scale liberation from the constructed myths we use
to organize our experience, as well as from the dangers that
over-dependence on such narratives brings forth. At least I hope so.

At the very least, narratives are less dangerous when we are free to
participate in their writing. I'll venture that it is qualitatively
better for human beings to take an active role in the unfolding of our
collective story than it is to adhere blindly to the testament of our
ancestors or authorities.

But what of moving out of the narrative altogether? Is it even
possible? Is our predisposition for narrative physiological,
psychological, or cultural?

Is it an outmoded form of cognition that yields only bloody clashes
when competing myths are eventually mistaken for irreconcilable
realities? Or are stories the only way we have of interpreting our
world -- meaning that the forging of a collective set of mutually
tolerant narratives is the only route to a global civilization?

Douglas Rushkoff is a Professor of Media Culture at New York
University's Interactive Telecommunications Program and author of
Coercion: Why We Listen to What "They" Say.
________________________________________________________________

"Eureka: What makes coherence so important to us?"
When something is missing, it bothers us that things don't hang
together. Consider: "Give him." In any language, that is a bothersome
sentence. Something essential is missing, and it rings an alarm bell
in our brains. We go in search of an implied "what" and try to guess
what will make the words all hang together into a complete thought.

We ask questions in search of satisfying incompletes, again hoping to
create some coherence. No other animal does such things. It even forms
the basis of many of our recreations such as jigsaw and crossword
puzzles, all those little eurekas along the way.

Guessing a hidden pattern fascinates us. It's part of our pleasure in
complex ritual or listening to Bach, to be able to guess what comes
next some of the time. It's boring when it is completely predictable,
however; it's the search for how things all hang together that is so
much fun. Of course, we make a lot of mistakes. Every other winter, I
get fooled into thinking that a radio has been left on, somewhere in
the house, and I go in search of it -- only to realize that it was
just the wind whistling around the house. My brain tried to make
coherence out of chaos by trying out familiar word patterns on it.

Astrology, too, seems to make lots of things "all hang together."
Often in science, we commit such initial errors but we are now fairly
systematic about discovering and discarding them. We go on to find
much better explanations for how things hang together. Finding
coherence is one of our great pleasures. It would be nice to know what
predisposes our brain to seek out hidden coherence.

For one thing, it might help illuminate the power of an idea -- and
with it, how fanaticism works.
Fundamentalist schemes that seem to make everything hang together can
easily override civilization's prohibitions against murder. Inferring
an enveloping coherence can create an "other" who is outside the
bounds of "us." Because it seems so whole, so right, it may become
okay to beat up on unbelievers -- say, fans of an opposing football
team, or of another religion.

For scientists and crossword fans, it's finding the coherence that is
important. Then we move on. But many people, especially in the
generation which follows its inventors, get trapped by a seemingly
coherent worldview. Things get set in concrete; the coherent framework
provides comfort, but it also creates dangerous us-and-them
boundaries.

William Calvin is a theoretical neurobiologist at the University of
Washington and author of How Brains Think.
________________________________________________________________

"Is morality relative or absolute?"
Humans spread out from a common origin into many different global
environments. It was a triumph of our unique adaptability, for we
display the broadest range of behaviours -- nutritional, social,
sexual and reproductive -- of any animal. We also have classes of
behaviour -- religious, scientific, artistic, gendered, and
philosophical, each underpinned by special languages -- that animals
lack. Paradoxically, success also came through conformity.
Prehistorians track archaeological cultures by recognizing the
physical symbolic codes (art styles, burial rites, settlement layouts)
that channelled local routines. Each culture constrained diversity and
could punish it with ostracism and death. Isolation bred idiosyncrasy,
and there was a shock when we began regional reintegration. Early
empires created state religions which, although sometimes refracting
species-wide instincts for a common good, tended to elevate chosen
peoples and their traditional ways.

Now that we can monitor all of our cultures, there is a need to adjudicate
on conduct at a global level. But my question is not understood in the
same way by everyone. To fundamentalists, it is heretical, because
morality is God-given. Social theorists, on the other hand, often
interpret absolute morality as imperialist -- no more than local ethics
metastasized by (for example) the United Nations. But appeals to
protect cultural diversity are typically advanced without regard to
the reality of individual suffering in particular communities. A third
position, shared by many atheistic scientists and traditional
Marxists, is based on ideas of utility, happiness and material truth:
what is right is understood as being what is good for the species. But
no one agrees on what this is, or how competing claims for access to
it should be settled.

The 'ethics of care', first developed within feminist philosophy,
moves beyond these positions. Instead of connecting morals either to
religious rules and principles or reductive natural laws, it values
shared human capacities, such as intimacy, sympathy, trust, fidelity,
and compassion. Such an ethics might elide the distinction between
relative and absolute by promoting species-wide common sense. Before
we judge the prospect of my question vanishing as either optimistic or
naïve, we must scrutinize the alternatives carefully.

Timothy Taylor is an archaeologist at the University of Bradford, UK, and
author of The Prehistory of Sex: Four Million Years of Human Sexual
Culture.
________________________________________________________________

"How will the sciences of the mind constrain our theories and policies
of education?"
In several recent meetings that I have attended, I have been
overwhelmed by the rift between what the sciences of mind, brain and
behavior have uncovered over the past decade, and both how and what
science educators teach.

In many arenas, educators hold on to a now dated view of the child's
cognitive development, failing to appreciate the innate biases that
our species has been equipped with. These biases constrain not only
what the child can learn, but when it might most profitably learn such
things. Take, for instance, the acquisition of mathematical knowledge.
Educators aim for the acquisition of precise computations. There is
now, however, evidence for an innately available approximate number
system, one that operates spontaneously without education.

One might imagine that if educators attempted to push this system
first -- teaching children that 40 is a better answer to 25 + 12 than
is 60 -- it might well facilitate the acquisition of the more
precise system later in development. Similar issues arise in
attempting to teach children about physics and biology. At some level,
then, there must be a way for those in the trenches to work together
with those in the ivory tower to advance the process of learning,
building on what we have discovered from the sciences of the mind.

Marc D. Hauser is an evolutionary psychologist, a professor at Harvard
University and author of Wild Minds: What Animals Think.
________________________________________________________________

"What does it mean to have an educated mind in the 21st century?"

While education is on every politician's agenda as an item of serious
importance, it is astonishing that the notion of what it means to be
educated never seems to come up. Our society, which is undergoing
massive transformations almost on a daily basis, never seems to
transform its notion of what it means to be educated. We all seem to
agree that an educated mind certainly entails knowing literature and
poetry, appreciating history and social issues, being able to deal
with matters of economics, being versatile in more than one language,
understanding scientific principles and the basics of mathematics.

What I was doing in my last sentence was detailing the high school
curriculum set down in 1892 by a committee chaired by the President of
Harvard that was mandated for anyone who might want to enter a
university. The curriculum they decided upon has not changed at all
since then. Our implicit notions of an educated mind are the same as
they were in the nineteenth century. No need to teach anything new, no
need to reconsider how a world where a university education was
offered solely to the elite might be different from a world in which a
university degree is commonplace.

For a few years, in the early 90's, I was on the Board of Editors of
the Encyclopedia Britannica. Most everyone else on the board was an
octogenarian -- the foremost of these, since he seemed to have
everyone's great respect, was Clifton Fadiman, a literary icon of the
40's. When I tried to explain to this board the technological changes
that were about to come that would threaten the very existence of the
Encyclopedia, there was a general belief that technology would not
really matter much. There would always be a need for the encyclopedia
and the job of the board would always be to determine what knowledge
was the most important to have. Only Clifton Fadiman seemed to realize
that my predictions about the internet might have some effect on the
institution they guarded. He concluded sadly, saying: "I guess we will
just have to accept the fact that minds less well educated than our
own will soon be in charge."

Note that he didn't say "differently educated," but "less well
educated." For some years the literati have held sway over the
commonly accepted definition of education. No matter how important
science and technology seem to industry or government or indeed to the
daily life of the people, as a society we believe that those educated
in literature and history and other humanities are in some way better
informed, more knowing, and somehow more worthy of the descriptor
"well educated."

Now if this were an issue confined to those who run the elite
universities and prep schools or those whose bible is the New York
Review of Books, this really wouldn't matter all that much to anybody.
But this nineteenth century conception of the educated mind weighs
heavily on our notions of how we educate our young. We are not
educating our young to work or to live in the nineteenth century, or
at least we ought not be doing so. Yet when universities graduate
thousands of English and history majors, it can only be because
we imagine that such fields form the basis of the educated mind. When
we choose to teach our high schoolers trigonometry instead of, say,
basic medicine or business skills, it can only be because we think
that trigonometry is somehow more important to an educated mind or
that education is really not about preparation for the real world.
When we focus on intellectual and scholarly issues in high school as
opposed to more human issues like communications, or basic psychology,
or child raising, we are continuing to rely upon outdated notions of
the educated mind that come from elitist notions of who is to be
educated.

We argue that an educated mind can reason, but curiously there
are no courses in our schools that teach reasoning. When we say that
an educated mind can see more than one side of an argument, we go
against the school system which holds that there are right answers to
be learned and that tests can reveal who knows them and who doesn't.

Now obviously telecommunications is more important than basic
chemistry and HTML is more significant than French in today's world.
These are choices that have to be made, but they never will be made
until our fundamental conception of erudition changes or until we
realize that the schools of today must try to educate the students who
actually attend them as opposed to the students who attended them in
1892.

The 21st century conception of an educated mind is based upon old
notions of erudition and scholarship not germane to this century. The
curriculum of the school system bears no relation to the finished
products we seek. We need to rethink what it means to be educated and
begin to focus on a new conception of the very idea of education.

Roger Schank is Distinguished Career Professor, School of Computer
Science, Carnegie-Mellon University and author of Virtual Learning: A
Revolutionary Approach to Building a Highly Skilled Workforce.
________________________________________________________________

"Do the benefits accruing to humankind (leaving aside questions of
afterlife) from the belief and practice of organized religions
outweigh the costs?"
Given the political sensitivities of the topic, it is hard to imagine
that a suitably rigorous attempt to answer this question could be
organized or its results published and discussed soberly, but it is
striking that there is no serious basis on which to conduct such a
conversation. Religion brings peace and solace to many; religion kills
people, divides societies, diverts energy and resources. How to assess
the net impact in some meaningfully quantitative way? Even to imagine
the possibility of such an inquiry and to think through some of the
categories you would use could be very enlightening.

James J. O'Donnell is Professor of Classical Studies and Vice Provost
at UPenn and author of Avatars of the Word: From Papyrus to
Cyberspace. 
________________________________________________________________

"Is technology going to 'wake up' or 'come alive' anytime in the
future?"
Bill Joy, the prominent computer scientist, argued in a Wired article
last year that "the future doesn't need us" because other creatures,
artificial or just post-human, are going to take over the world in the
21st century. He is worried that various technologies -- particularly
robotics, genetic engineering and nanotechnology -- are soon going to
be capable of generating either a self-conscious machine (something
like the Internet "waking up") or one capable of self-replication
(nanotechnologists inspired by the vision of Eric Drexler are
currently attempting to create a nano-scaled "universal assembler").
If either of these events came to pass, it would surely introduce
major changes in the planetary ecology, and humans would have to find
a new role to play in such a world. But is Joy right? Do we have to
worry about mad scientists producing some invention that inadvertently
renders us second-class citizens to machines in the next couple of
decades? (Joy is so distraught by this prospect he would have everyone
stop working in these areas.)

This is a difficult question to answer, mostly because we don't
currently have a very good idea about how technology evolves, so it's
hard to predict future developments. But I believe that we can get
some way toward an answer by adopting an approach currently being
developed by some of our best evolutionary thinkers, such as John
Maynard Smith, Eors Szathmary, and others. This "major transition"
theory is concerned with determining the conditions under which new
kinds of agents emerge in some evolutionary lineage. Examples of such
transitions occurred when prokaryotes became eukaryotes, or
single-celled organisms became multicellular. In each case,
previously independent biological agents evolved new methods of
cooperation, with the result that a new level of organization and
agency appeared in the world. This theory hasn't yet been applied to
the evolution of technology, but could help to pinpoint important
issues. In effect, what I want to investigate is whether the futures
that disturb Bill Joy can be appropriately analyzed as major
transitions in the evolution of technology. Given current trends in
science and technology, can we say that a global brain is around the
corner, or that nano-robots are going to conquer the Earth? That, at
least, is my current project.

Robert Aunger is an evolutionary theorist and editor of Darwinizing
Culture: The Status of Memetics as a Science. 
________________________________________________________________

"Was there any choice in the creation of the Universe?"

Here I paraphrase Einstein's famous question: "Did God have any choice
in the creation of the Universe?" I get rid of the God part, which
Einstein only added to make it seem more whimsical, I am sure, because
that just confuses the issue. The important question, perhaps the most
important question facing physics today, is whether there is only one
consistent set of physical laws that allows a working universe, or
rather whether the constants of nature are arbitrary and could take any
set of values. Namely, if we continue to probe into the
structure of matter and the nature of elementary forces will we find
that mathematical consistency is possible only for one unique theory
of the Universe, or not? In the former case, of course, there is hope
for an exactly predictive "theory of everything". In the latter case,
we might expect that it is natural that our Universe is merely one of
an infinite set of Universes within some grand multiverse, in each of
which the laws of physics differ, and in which anthropic arguments may
govern why we live in the Universe we do.

The goal of physics throughout the ages has been to explain exactly
why the universe is the way it is, but as we push closer and closer to
the ultimate frontier, we may find out that in fact the ultimate laws
of nature may generically produce a universe that is quite different
from the one we live in. This would force a dramatic shift in our
concept of natural law.

Some may suggest that this question is mere philosophical nonsense,
and is akin to asking how many angels may sit on the head of a pin.
However, I think that if we are lucky it may be empirically possible
to address it. If, for example, we do come up with some fundamental
theory that predicts the values of many fundamental quantities
correctly, but that predicts that other mysterious quantities, like
the energy of empty space, are generically different from the values we
measure, or perhaps are determined probabilistically, this will add
strong ammunition to the notion that our universe is not unique, but
arose from an ensemble of causally disconnected parts, each with
randomly varying values of the vacuum energy.

In any case, answerable or not, I think this is the ultimate question
in science.

Lawrence Krauss is Professor of Physics at Case Western Reserve
University and the author of Atom.
________________________________________________________________

"How much can we handle?"

We've got fundamental scientific theories (such as quantum theory and
relativity) that test out superbly, even if we don't quite know how
they all fit into a whole, but we're hung up trying to understand
complicated phenomena, like living things. How much complexity can we
handle?

We ought to be able to use computers to model complicated things, but
we can't as yet write software that's complicated enough to take
advantage of the ever-bigger computers we are learning to build.
Complexity, side effects, legacy. How much can we handle? That's the
question of the new century.
There's a social variant of the same problem:
In the twentieth century we become powerful enough to destroy
ourselves, but we seemed to be able to handle that. Now technology and
information flow have improved to the point that a small number of us
might be able to destroy us all. Can we handle that?

Jaron Lanier, computer scientist and musician, is currently the lead
scientist for the National Tele-Immersion Initiative.
________________________________________________________________

"Why am I me?"
This question was asked by my eight-year-old grandson George. In eight
letters it summarizes the conundrum of personal existence in an
impersonal universe. How does it happen that a couple of liters of
grey matter organizes itself into the unique stream of self-awareness
that calls itself George? If we could answer this question, we would
be on the way toward an understanding of brain structure and function
at a deep level. We would probably have in our hands the key to a more
rational and discriminating treatment of mental illnesses. We might
also have the key to the design of a genuine artificial intelligence.

Every human being must have asked this question in one way or another.
For most of us, the question expresses only a general philosophical
curiosity about our place in the order of nature. But for George the
question has a more specific technical meaning. He has an identical
twin brother Donald, and he understands the distinction between
monozygotic and fraternal twins. He knows that he and Donald not only
have the same genes but also have the same environment and upbringing.
When George asks the question, he is asking how it happens that two
people with identical genes and identical nurture are nevertheless
different. What are the non-genetic and non-environmental processes in
the brain that cause George to be George and cause Donald to be
Donald? If we could answer this question, we would have a powerful new
tool for the investigation of cognitive development. The conventional
wisdom says that mental differences between George and Donald arise
from local randomness of neural connections, undetermined either by
genes or by sensory input. But to say that the connections are random
only means that we do not yet understand how they came about.

Freeman Dyson is professor of physics at the Institute for Advanced
Study and author of The Sun, the Genome, and the Internet.
________________________________________________________________

"Do we want to live in one world, or two?"

One of the great achievements of recent history has been a dramatic
reduction in absolute poverty in the world. In 1820 about 85% of the
world's population lived on the equivalent of a dollar a day
(converted to today's purchasing power). By 1980, that percentage had
dropped to 30%, and it is now down to 20%.

But that still means 1 billion people live in absolute poverty. A
further 2 billion are little better off, living on $2 a day. A quarter
of the world's people never get a cup of clean water.

Part of what globalisation means is that we have a reasonable chance
of assuring that a majority of the world's people will benefit from
continuing economic growth, improvements in health and education, and
the untapped potential of the extraordinary technologies about which
most of the Edge contributors write so eloquently.

We currently lack the political will to make sure that a vast number
of people are not fenced off from this optimistic future. So my
question poses a simple choice. Are we content to have two
increasingly estranged worlds? Or do we want to find the path to a
unified, healthy world?

Lance Knobel is Adviser, Prime Minister's Forward Strategy Unit,
London, and the former head of the program of the World Economic
Forum's Annual Meeting in Davos.
________________________________________________________________

"What's the neurobiology of doing good and being good?"

I've spent most of my career as a neurobiologist working on an area of
the brain called the hippocampus. It's a fairly useful region -- it
plays a critical role in learning and memory. It's the area that's
damaged in Alzheimer's, in alcoholic dementia, during prolonged
seizures or cardiac arrest. You want to have your hippocampus
functioning properly. So I've spent all these years trying to figure
out why hippocampal neurons die so easily and what you can do about
it. That's fine, might even prove useful some day. But as of late,
it's been striking me that I'm going to be moving in the direction of
studying a part of the brain called the prefrontal cortex (PFC).

It's a fascinating part of the brain, the part of the brain that most
defines us as humans. There are endless technical ways to describe what
the PFC does, but as an informal definition that works pretty well,
it's the closest thing we have to a superego. The PFC is what allows
us to become potty trained early on. And it is responsible for
squeezing our psychic sphincters closed as well. It keeps us from
belching loudly at the quiet moment in the wedding ceremony, prevents
us from telling our host just what we really think of the inedible
meal they've served. It keeps us from having our murderous thoughts
turn into murderous acts. And it plays a similar role in the cognitive
realm -- the PFC stops us from falling into solving a problem with an
answer that, while the easier, more reflexive one, is wrong. The PFC
is what makes us do the right thing, even if it's harder.

Not surprisingly, it's one of the last parts of the brain to fully
develop (technical jargon -- to fully myelinate). But what is
surprising is just how long it is before the PFC comes fully on line
-- astonishingly, around age 30. And this is where my question comes
in. It is best framed in the context of young kids, and this is
probably what has prompted me to begin to think about the PFC, as I
have two young children. Kids are wildly "frontally disinhibited," the
term for having a PFC that hasn't quite matured yet into keeping its
foot firmly on the brake. Play hide and seek with a three-year-old,
loudly, plaintively call, "Where are you?" and their lack of frontal
function does them in -- they can't stop themselves from calling out
-- "Here I am, under the table" -- giving away their hiding spot. I
suspect that there is a direct, near linear correlation between the
number of fully myelinated frontal neurons in a small child's brain
and how many dominoes you can line up in front of him before he must
MUST knock them over.

So my question comes to the forefront in a scenario that came up
frequently for me a few years ago: my then three-year-old, who, while a
wonderful child, was distinctly three, would do something reasonably
appalling to his younger sister -- take some stuffed animal away, grab
some contested food item, whatever. A meltdown would then ensue. My wife or
I would intervene, strongly reprimanding our son for mistreating his sister.
And then the other parent would say, "Well, is it really fair to be
coming down on him like this? After all, he has no frontal function
yet; he can't stop himself" (my wife is a neuropsychologist so,
pathetically, we actually speak this way to each other). And the other
would retort -- "Well, how else is he going to develop that frontal
function?"

That's the basic question -- how does the world of empathy, theory of
mind, gratification postponement, Kohlberg stages of moral
development, etc., combine with the world of neurotrophic growth
factors stimulating neurons to grow fancier connections? How do they
produce a PFC that makes you do the harder thing because it's right?
How does this become a life-long pattern of PFC function?

Robert Sapolsky is a professor of biological sciences at Stanford
University and author of A Primate's Memoir.
________________________________________________________________

"Is humanity in the midst of a cognitive 'Fourth-Transition?' Or, why
doesn't the Encyclopedia Brittanica matter any more?"

It feels to me like something very important is going on. Clearly our
children aren't quite like us. They don't learn about the world as we
did. They don't storehouse knowledge about the world as we have. They
don't "sense" the world as we do. Could humanity possibly already be
in the middle of a next stage of cognitive transition?

Merlin Donald has done a fine job of summarizing hundreds of inquiries
into the evolution of culture and cognition in his Origins of the
Modern Mind. Here, as in his other work, he posits a series of
"layered" morphological, neurological and external technological
stages in this evolutionary path. What he refers to as the "Third
Transition" (from "Mythic" to "Theoretic" culture), appears to have
begun 2500 (or so) years ago and has now largely completed its march
to "mental" dominance worldwide.

While this last "transition" did not require biological adaptation (or
speciation), it nonetheless changed us -- neurologically and
psycho-culturally. The shift from the "primary orality" of "Mythic
culture" to the literacy and the reliance of what Donald calls an
"External Symbolic Storage" network, has resulted in a new sort of
mind. The "modern" mind.

Could we be "evolving" towards an even newer sort of mind as a result
of our increasing dependence on newer sorts of symbolic networks and
newer environments of technologies?

Literacy (while still taught and used) doesn't have anywhere near the
clout it once had. Indeed, as fanatical "literalism" (aka
"fundamentalism") thrashes its way to any early grave (along with the
decline of the reciprocal fascination of the past 50 years to
"deconstruct" everything as "texts"), how much will humanity care
about and rely upon the encyclopedic storage of knowledge in
alphabetic warehouses?

Perhaps we are already "learning," "knowing" and "sensing" the world
in ways that presage something very different from the "modern" mind.
Should we ask the children?

Mark Stahlman, a venture capitalist who has been focused on next
generation computer/networking platforms, is co-founder of the Newmedia
Laboratory, NYNMA.
________________________________________________________________

"What are minds, that they are both essentially mental yet
inextricably intertwined with body (and world)?"

We thought we had this one nailed. Believing (rightly) that the
physical world is all there is, the sciences of the mind re-invented
thought and reason (and feeling) as information-processing events in
the human brain. But this vision turns out to be either incomplete or
fatally flawed. The neat and tidy division between a level of
information processing (software) and a level of physicality (implementation)
is useful when we deal with humanly engineered systems. We build such
systems, as far as possible, to keep the levels apart. But nature was
not guided by any such neat and tidy design principles. The ways that
evolved creatures solve problems of anticipation, response, reasoning
and perceiving seem to involve endless leakage and interweaving
between motion, action, visceral (gut) response, and somewhat more
detached contemplation. When we solve a jigsaw puzzle, we look, think,
and categorise: but we also view the scene and pieces from new angles,
moving head and body. And we pick pieces up and try them out. Real
on-the-hoof human reason is like that through and through. Even the
use of pen and paper to construct arguments displays the same complex
interweaving of embodied action, perceptual re-encountering, and
neural activity. Mind and body (and world) emerge as messily and
continuously coupled partners in the construction of rational action.

But this leads to a very real problem, an impasse that is currently
the single greatest roadblock in the attempts to construct a mature
science of the mind. We cannot, despite the deep and crucial roles of
body and world, understand the mind in quite the same terms as, say,
an internal combustion engine. Where minds are concerned, it is the
flow of contents (and feelings) that seems to matter. Yet if we
prescind from the body and world, pitching our stories and models at
the level of the information flows, we again lose sight of the
distinctively human mind. We need the information-and-content based
story to see the mind as, precisely, a mind. Yet we cannot do justice
to minds like ours without including body, world (cognitive tools and
other people) and motion in roles which are both genuinely cognitive
yet thoroughly physical.

What we lack is a framework, picture, or model in terms of which to
understand this larger system as the cognitive engine. All current
stories are forced to one side (information flows) or the other
(physical dynamics). Cognitive Science thus stands in a position
similar to that of Physics in the early decades of the 20th century.
What we lack is a kind of 'quantum theory' of the mind: a new
framework that displays mind as mind, yet as body in action too.

Andy Clark is Professor of Philosophy and Cognitive Science at the
University of Sussex, UK and the author of Being There: Putting Brain,
Body and World Together Again.
________________________________________________________________

"At what age should women say, 'No,' to first-time pregnancy?"

Scientific advances now make it possible for a woman past normal
child-bearing years to bear a child. Some of my high-tech friends who
range from age 43 to almost 50 are either bearing children or plan to,
using in-vitro techniques. These women have postponed childbearing
because of their careers, but they want to experience the joys of
family that their male counterparts were able to share while still
pursuing their professional goals -- an option far more difficult for
the childbearer and primary care provider.

Many successful men start first, second, or third families later in
their lives, so why should we criticize women who want to bear a first
child, when, thanks to science, it is no longer "too late"?

Sylvia Paull is the founder of Gracenet (www.gracenet.net).
________________________________________________________________

"What is the relationship between being alive and having a mind?"

Last year, Steven Spielberg directed a film, based upon a Stanley
Kubrick project, entitled "A.I. Artificial Intelligence". The film
depicts a robotic child who develops human emotions. Is such a thing
possible? Could a sufficiently complex and appropriately designed
computer embody human emotions? Or is this simply a fanciful notion
that the public and some scientists who specialize in artificial
intelligence just wish could be true?

I don't think that computers will ever become conscious, and I view
Spielberg's depiction of a conscious, feeling robot as a good example of
what might be called the "Spielberg Principle," which states: "When a
Steven Spielberg film depicts a world-changing scientific event, the
likelihood of that event actually occurring approaches zero." In other
words, our wishes and imagination often have little to do with what is
scientifically likely or possible. For example, although we might wish
for contact with other beings in the universe as portrayed in the
Spielberg movie "E.T", the astronomical distances between our solar
system and the rest of the universe makes an E.T.-like visit extremely
unlikely.

The film A.I. and the idea contained within it that robots could
someday become conscious is another case in which our wishes exceed
reality. Despite enormous advances in artificial intelligence, no
computer is able to experience a pin prick like a simple frog, or get
hungry like a rat, or become happy or sad like all of us carbon-based
units. But why is this the case? It is my conjecture that this is
because there are some features of being alive that make mind,
consciousness, and feelings possible. That is, only living things are
capable of the markers of mind such as intentionality, subjectivity,
and self-awareness. But the important question of the link between
life and the creation of consciousness remains a great scientific
mystery, and the answer will go a long way toward our understanding of
what a mind actually is.

Todd E. Feinberg, MD is Chief, Yarmon Neurobehavior and Alzheimer's
Disease Center, Beth Israel Medical Center.
________________________________________________________________

"To be or not to be?"
Old questions don't go away (at least while they remain unanswered).
Suppose Edge were to have asked Hamlet for his Y 2002 question. We can
guess the answer. "Sorry, John, I know it's a bit of a cliché, but
it's the same question it has always been." Suppose Edge turned next
to Albert Camus. "John, I said it in 1942 and I'm still waiting.
'There is but one truly serious philosophical problem and that is
suicide. Judging whether life is or is not worth living amounts to
answering the fundamental question of philosophy. All the rest --
whether or not the world has three dimensions, whether the mind has
nine or twelve categories -- comes afterwards.'"
Clichés they may be.  But I'd say there's every reason for students of
human nature to continue to treat these questions with due
seriousness: and in particular to think further about who has been
asking them,  when, and why, and with what consequences.  It may seem
a paradox that human beings should have evolved to have a love-hate
relationship with their own existence. But in fact there may be a
simple Darwinian story to be told about how it has come to be  so.
Let's accept the stark truth that individual human beings have been
designed by natural selection to be, in Dawkins' famous phrase,
"survival machines" whose primary function is to help the genes they
carry to make it into future generations. We should admit, then, that,
from this evolutionary viewpoint, an  individual human life cannot be
considered an end in itself but only a means to promoting the success
of genes.

Yet the fact is that in the human case (and maybe the human case
alone) natural selection has devised a peculiarly effective trick  for
persuading individual survival machines to fulfill this seemingly
bleak role. Every human being is endowed  with the mental programs for
developing a "conscious self" or "soul": a soul which not only  values
its own survival but  sees itself as very much an end in its own
right  (in fact a soul which, in a fit of solipsism,  may even
consider itself the one and only source of all the ends there are!).
Such  a soul, besides doing all it can to ensure its own basic comfort
and security, will typically strive for self-development: through
learning, creativity, spiritual growth, symbolic expression,
consciousness-raising, and so on. These activities redound to the
advantage of mind and body. The result is that such "selfish souls" do
indeed make wonderful agents for "selfish genes".

There has, however, always been a catch. Naturally-designed
"survival machines" are not, as the name might imply, machines designed
to go on and on surviving: instead they are machines designed to
survive only up to a point -- this being the point where the genes
they carry have nothing more to gain (or even things to lose) from
continued life. For it's a sobering fact that genes are generally
better off taking passage and propagating themselves in younger
machines than older ones (the older ones will have begun to accumulate
defects, to have become set in their ways, to have acquired more than
enough dependents, etc.) It suits  genes therefore that their survival
machines should have a limited life-time, after which they can be
scrapped.

Thus,  in a scenario that has all the makings of tragedy (if not a
tragic farce),  natural selection has, on the one hand,  been shaping
up individual human beings at the level of their souls to believe in
themselves and their intrinsic worth, while on the other hand taking
steps to ensure that these same individuals on the level of their
bodies grow old and die --  and, most likely, since by this stage of a
life the genes no longer have any interest in preventing it,  to die
miserably, painfully and in a state of dreadful disillusion.

However,  here's the second catch. In order for this double-game that
the genes are playing to be successful, it's essential that the soul
they've designed does not see what's coming and realise the extent to
which it has been duped, at least until too late. But this means
preventing the soul, or at any rate cunningly diverting it,  from
following some of the very lines of inquiry on which it has been set
up  to place its hopes: looking to the future, searching for eternal
truths, and so on. In Camus' words "Beginning to think is beginning to
be undermined".

The history of human psychology and culture has revolved around this
contradiction built into human nature. Science has not had much to say
about it. But it may yet.

Nicholas Humphrey is a theoretical psychologist at the London School
of Economics, and author of Leaps of Faith.
________________________________________________________________

"Why Sleep?"
We need to sleep every day. Why do we spend a third of our lives in a
dormant state? Sleep deprivation leads to loss of judgment, failure of
health, and eventually to death. The cycle of sleep and alertness is
controlled by circadian rhythms, which also affect body temperature,
digestion and other regulatory systems. Despite the importance of
sleep, its purpose is a mystery.

The brain remains highly active during sleep, so the simple
explanation that we sleep in order to rest cannot be the whole story.
Activity in the sleeping brain is largely hidden from us because very
little that occurs during sleep directly enters consciousness.
However, electrical recordings and more recently brain imaging
experiments during slow-wave sleep have revealed highly ordered
patterns of activity that are much more spatially and temporally
coherent than brain activity during states of alertness. Slow-wave
sleep alternates during the night with rapid eye movement (REM)
sleep, during which dreams occur and muscles are paralyzed. For the
last 10 years my colleagues and I have been building computer models
of interacting neurons that can account for rhythmic brain activity
during sleep.
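
As a rough illustration only -- not the conductance-based thalamocortical
models referred to here, whose details are not given in this text -- the
toy sketch below shows how a population of weakly coupled oscillators can
settle into the kind of coherent collective rhythm that slow-wave sleep
exhibits. Every parameter is invented for illustration.

  # Minimal toy sketch (a Kuramoto-style network, not Sejnowski's model):
  # weakly coupled oscillators falling into a coherent, synchronized rhythm.
  import numpy as np

  rng = np.random.default_rng(0)
  n, coupling, dt, steps = 100, 1.5, 0.01, 5000
  freqs = rng.normal(1.0, 0.1, n)          # intrinsic frequencies (arbitrary units)
  phase = rng.uniform(0, 2 * np.pi, n)     # random initial phases

  for _ in range(steps):
      mean_field = np.mean(np.exp(1j * phase))   # population "order parameter"
      r, psi = np.abs(mean_field), np.angle(mean_field)
      phase += dt * (2 * np.pi * freqs + coupling * r * np.sin(psi - phase))

  # coherence closer to 1 means the units have synchronized
  print(f"coherence r = {np.abs(np.mean(np.exp(1j * phase))):.2f}")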

Computer models of the sleeping brain and recent experimental evidence
point toward slow-wave sleep as a time during which brain cells
undergo extensive structural reorganization. It takes many hours for
the information acquired during the day to be integrated into
long-term memory through biochemical reactions. Could it be that we go
to sleep every night in order to remember better and think more
clearly?

Introspection is misleading in trying to understand the brain in part
because much of the processing that takes place to support seeing,
hearing and decision-making is subconscious. In studying the brain
during sleep when we are aware of almost nothing, we may get a better
understanding of the brain's secret life and uncover some of the
elusive principles that make the mind so illusive.

Terrence Sejnowski, a computational neurobiologist and Professor at
the Salk Institute for Biological Studies, is a coauthor of
Thalamocortical Assemblies: How Ion Channels, Single Neurons and
Large-Scale Networks Organize Sleep Oscillations.
________________________________________________________________

"What makes a genius, and how can we have more of them?"
As any software developer will tell you, one great programmer is
easily worth ten average ones. The great strides in knowledge have
most often come from those we label "genius." Newton, Gauss, Einstein,
Feynman, de Morgan, and Crick all seemed to be able to make connections
or see patterns that others had ignored. They often visualized the
world differently, or with fewer constraints than most of us have on
our imagination. There are many great problems of science and society
to be solved, and applying genius to them could help speed the
solutions.
Perhaps the analysis of Einstein's brain done by Professor Diamond at
Berkeley, which seems to show differences in structure in the inferior
parietal region and a higher proportion of glial cells, can lead to
some physiological answers. Perhaps there are chemical enhancers which
can be used (legally, one would hope) to increase oxygen flow to
neurons. Perhaps behavioral conditioning when we're young can help
create more of the right type of structures, just as musicians who
begin training in early childhood have larger portions of the brain
devoted to their skills.
Whatever the answer, mankind might be the better for some more genius
directed at the environmental, social and scientific fields.

Howard Morgan is Vice-Chairman, Idealab.
________________________________________________________________

"Why do people -- even identical twins -- differ from one another in
personality?"
This question needs to be asked because of the widely held conviction
that we already know the answer to it. We don't. Okay, we know half of
the answer: one of the reasons why people differ from each other is
that they have different genes. That's the easy half.
The hard half is the part that isn't genetic. Even people who have
identical genes, like Freeman Dyson's twin grandsons (see his
question), differ in personality. I am not asking about the feeling
each twin has of being "me": George and Donald could be identical in
personality, and yet each could have a sense of me-ness.

But if George and Donald are like most identical twins, they aren't
identical in personality. Identical twins are more alike than
fraternal twins or ordinary siblings, but less alike than you would
expect. One might be more meticulous than the other, or more outgoing,
or more emotional. The weird thing is that the degree of similarity is
the same, whether twins are reared together or apart. George and
Donald, according to their grandfather, "not only have the same genes
but also have the same environment and upbringing." And yet they are
no more alike in personality than twins reared by two different sets
of parents in two different homes.

We know that something other than genes is responsible for some of the
variation in human personality, but we are amazingly ignorant about
what it is and how it works. Well-designed research has repeatedly
failed to confirm commonly held beliefs about which aspects of a
child's environment are important. The evidence indicates that neither
those aspects of the environment that siblings have in common (such as
the presence or absence of a caring father) nor those that supposedly
widen the differences between siblings (such as parental favoritism or
competition between siblings) can be responsible for the non-genetic
variation in personality. Nor can the vague idea of an "interaction"
between genes and environment save the day. George and Donald have the
same genes, so how can an interaction between genes and environment
explain their differences?
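
One way to see why the genetic half is the "easy half" is the standard
twin-study arithmetic (Falconer's formulas). The sketch below uses
illustrative correlations of the size typically reported for personality
traits -- not figures from Harris's text -- and yields roughly half the
variance attributed to genes, essentially none to the shared family
environment, and the remainder unexplained.

  # Illustrative twin-study arithmetic (Falconer's formulas); the
  # correlations are typical textbook values, not data from this essay.
  r_mz = 0.45   # identical (monozygotic) twin correlation
  r_dz = 0.20   # fraternal (dizygotic) twin correlation

  heritability       = 2 * (r_mz - r_dz)    # ~0.50: the "easy half"
  shared_environment = r_mz - heritability  # ~-0.05: effectively zero
  nonshared          = 1 - r_mz             # ~0.55: the unexplained "hard half"

  print(heritability, shared_environment, nonshared)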

Only two hypotheses are compatible with the existing data. One, which
I proposed in my book The Nurture Assumption, is that the crucial
experiences that shape personality are those that children have
outside their home. Unfortunately, there is as yet insufficient
evidence to support (or disconfirm) this hypothesis.

The remaining possibility is that the unexplained variation in
personality is random. Even for reared-together twins, there are
minor, random differences in their experiences. I find it
implausible, however, that minor, random differences in experiences
could be so potent, given the ineffectiveness of substantial,
systematic differences. If randomness affects personality, the way it
probably works is through biological means -- not genetic but
biological. The human genome is smallish and the human brain is vast;
the genome couldn't possibly contain precise specifications for every
neuron and synapse. Identical twins don't have identical brains for
the same reason that they don't have identical freckles or
fingerprints.

If these random physical differences in the brain are responsible for
some or all of the personality differences between identical twins,
they must also be responsible for some or all of the non-genetic
variation in personality among the rest of us. "All" is highly
unlikely; "some" is almost certainly true. What remains in doubt is
not whether, but how much.

The bottom line is that scientists will probably never be able to
predict human behavior with anything close to certainty. Next
question: Is this discouraging news or cause for celebration?

Judith Rich Harris is a developmental psychologist and author of The
Nurture Assumption: Why Children Turn Out The Way They Do. 
________________________________________________________________

"Many Universes?"

Preliminaries

We do not know whether there are other universes. Perhaps we never
shall. But I want to respond to Paul Davies' questions by arguing that
"do other universes exist?" can be a genuine scientific question.
Moreover, I shall outline why it is an interesting question; and why,
indeed, I already suspect that the answer may be "yes".

First, a pre-emptive and trivial comment: if you define the universe
as "everything there is", then by definition there cannot be others. I
shall, however, follow the convention among physicists and
astronomers, and define the "universe" as the domain of space-time
that encompasses everything that astronomers can observe. Other
"universes", if they existed, could differ from ours in size, content,
dimensionality, or even in the physical laws governing them.

It would be neater, if other "universes" existed, to redefine the
whole enlarged ensemble as "the universe", and then introduce some new
term -- for instance "the metagalaxy" -- for the domain that
cosmologists and astronomers have access to. But so long as these
concepts remain so conjectural, it is best to leave the term
"universe" undisturbed, with its traditional connotations, even though
this then demands a new word, the "multiverse", for a (still
hypothetical) ensemble of "universes."

Ontological Status Of Other Universes
Science is an experimental or observational enterprise, and it's
natural to be troubled by assertions that invoke something inherently
unobservable. Some might regard the other universes as being in the
province of metaphysics rather than physics. But I think they already
lie within the proper purview of science. It is not absurd or
meaningless to ask "Do unobservable universes exist?", even though no
quick answer is likely to be forthcoming. The question plainly can't
be settled by direct observation, but relevant evidence can be sought,
which could lead to an answer.

There is actually a blurred transition between the readily observable
and the absolutely unobservable, with a very broad grey area in
between. To illustrate this, one can envisage a succession of
horizons, each taking us further than the last from our direct
experience:

(i) Limit of present-day telescopes

There is a limit to how far out into space our present-day
instruments can probe. Obviously there is nothing fundamental about
this limit: it is constrained by current technology. Many more
galaxies will undoubtedly be revealed in the coming decades by
bigger telescopes now being planned. We would obviously not demote
such galaxies from the realm of proper scientific discourse simply
because they haven't been seen yet. When ancient navigators
speculated about what existed beyond the boundaries of the then
known world, or when we speculate now about what lies below the
oceans of Jupiter's moons Europa and Ganymede, we are speculating
about something "real" -- we are asking a scientific question.
Likewise, conjectures about remote parts of our universe are
genuinely scientific, even though we must await better instruments
to check them.

(ii) Limit in principle at present era

Even if there were absolutely no technical limits to the power of
telescopes, our observations are still bounded by a horizon, set by
the distance that any signal, moving at the speed of light, could
have travelled since the big bang. This horizon demarcates the
spherical shell around us at which the redshift would be infinite.
There is nothing special about the galaxies on this shell, any more
than there is anything special about the circle that defines your
horizon when you're in the middle of an ocean. On the ocean, you
can see farther by climbing up your ship's mast. But our cosmic
horizon can't be extended unless the universe changes, so as to
allow light to reach us from galaxies that are now beyond it. If
our universe were decelerating, then the horizon of our remote
descendants would encompass extra galaxies that are beyond our
horizon today. It is, to be sure, a practical impediment if we have
to await a cosmic change taking billions of years, rather than just
a few decades (maybe) of technical advance, before a prediction
about a particular distant galaxy can be put to the test. But does
that introduce a difference of principle? Surely the longer
waiting-time is a merely quantitative difference, not one that
changes the epistemological status of these faraway galaxies?

(iii) Never-observable galaxies from "our" Big Bang

But what about galaxies that we can never see, however long we
wait? It's now believed that we inhabit an accelerating universe.
As in a decelerating universe, there would be galaxies so far away
that no signals from them have yet reached us; but if the cosmic
expansion is accelerating, we are now receding from these remote
galaxies at an ever-increasing rate, so if their light hasn't yet
reached us, it never will. Such galaxies aren't merely unobservable
in principle now -- they will be beyond our horizon forever. But if
a galaxy is now unobservable, it hardly seems to matter whether it
remains unobservable for ever, or whether it would come into view
if we waited a trillion years. (And I have argued, under (ii)
above, that the latter category should certainly count as "real".)

(iv) Galaxies in disjoint universes

The never-observable galaxies in (iii) would have emerged from the
same Big Bang as we did. But suppose that, instead of
causally-disjoint regions emerging from a single Big Bang (via an
episode of inflation) we imagine separate Big Bangs. Are
space-times completely disjoint from ours any less real than
regions that never come within our horizon in what we'd
traditionally call our own universe? Surely not -- so these other
universes should count as real parts of our cosmos, too.

This step-by-step argument (those who don't like it might dub it a
slippery slope argument!) suggests that whether other universes exist
or not is a scientific question. But it is of course speculative
science. The next question is, can we put it on a firmer footing? What
could it explain?

Scenarios For A Multiverse

At first sight, nothing seems more conceptually extravagant -- more
grossly in violation of Ockham's Razor -- than invoking multiple
universes. But this concept is a natural consequence of several
different theories (albeit all speculative). Andrei Linde, Alex
Vilenkin and others have performed computer simulations depicting an
"eternal" inflationary phase where many universes sprout from separate
big bangs into disjoint regions of spacetimes. Alan Guth and Lee
Smolin have, from different viewpoints, suggested that a new universe
could sprout inside a black hole, expanding into a new domain of space
and time inaccessible to us. And Lisa Randall and Raman Sundrum
suggest that other universes could exist, separated from us in an
extra spatial dimension; these disjoint universes may interact
gravitationally, or they may have no effect whatsoever on each other.

There could be another universe just a few millimetres away from us.
But if those millimetres were measured in some extra spatial dimension
then to us (imprisoned in our 3-dimensional space) the other universe
would be inaccessible. In the hackneyed analogy where the surface of a
balloon represents a two-dimensional universe embedded in our
three-dimensional space, these other universes would be represented by
the surfaces of other balloons: any bugs confined to one, and with no
conception of a third dimension, would be unaware of their
counterparts crawling around on another balloon. Variants of such
ideas have been developed by Paul Steinhardt, Neil Turok and others.
Guth and Edward Harrison have even conjectured that universes could be
made in some far-future laboratory, by imploding a lump of material to
make a small black hole. Could our entire universe perhaps then be the
outcome of some experiment in another universe? If so, the theological
arguments from design could be resuscitated in a novel guise. Smolin
speculates that the daughter universe may be governed by laws that
bear the imprint of those prevailing in its parent universe. If that
new universe were like ours, then stars, galaxies and black holes
would form in it; those black holes would in turn spawn another
generation of universes; and so on, perhaps ad infinitum.

Parallel universes are also invoked as a solution to some of the
paradoxes of quantum mechanics, in the "many worlds" theory, first
advocated by Hugh Everett and John Wheeler in the 1950s. This concept
was prefigured by Olaf Stapledon, in his 1937 novel, as one of the
more sophisticated creations of his Star Maker: "Whenever a creature
was faced with several possible courses of action, it took them all,
thereby creating many ... distinct histories of the cosmos. Since in
every evolutionary sequence of this cosmos there were many creatures
and each was constantly faced with many possible courses, and the
combinations of all their courses were innumerable, an infinity of
distinct universes exfoliated from every moment of every temporal
sequence". None of these scenarios has been simply dreamed up out of
the air: each has a serious, albeit speculative, theoretical
motivation. However, one of them, at most, can be correct. Quite
possibly none is: there are alternative theories that would lead just
to one universe. Firming up any of these ideas will require a theory
that consistently describes the extreme physics of ultra-high
densities, how structures on extra dimensions are configured, etc. But
consistency is not enough: there must be grounds for confidence that
such a theory isn't a mere mathematical construct, but applies to
external reality. We would develop such confidence if the theory
accounted for things we can observe that are otherwise unexplained. At
the moment, we have an excellent framework, called the standard model,
that accounts for almost all subatomic phenomena that have been
observed. But the formulae of the "standard model" involve numbers
which can't be derived from the theory but have to be inserted from
experiment.

Perhaps, in the 21st century, physicists will develop a theory
that yields insight into (for instance) why there are three kinds of
neutrinos, and the nature of the nuclear and electric forces. Such a
theory would thereby acquire credibility. If the same theory, applied
to the very beginning of our universe, were to predict many big bangs,
then we would have as much reason to believe in separate universes as
we now have for believing inferences from particle physics about
quarks inside atoms, or from relativity theory about the unobservable
interior of black holes.

Universal Laws, Or Mere Bylaws?
"Are the laws of physics unique?" is a less poetic version of
Einstein's famous question "Did God have any choice in the creation of
the Universe?" The answer determines how much variety the other
universes -- if they exist -- might display. If there were something
uniquely self-consistent about the actual recipe for our universe,
then the aftermath of any big bang would be a re-run of our own
universe. But a far more interesting possibility (which is certainly
tenable in our present state of ignorance of the underlying laws) is
that the underlying laws governing the entire multiverse may allow
variety among the universes. Some of what we call "laws of nature" may
in this grander perspective be local bylaws, consistent with some
overarching theory governing the ensemble, but not uniquely fixed by
that theory.

As an analogy (one which I owe to Paul Davies) consider the form of
snowflakes. Their ubiquitous six-fold symmetry is a direct consequence
of the properties and shape of water molecules. But snowflakes display
an immense variety of patterns because each is moulded by its
micro-environments: how each flake grows is sensitive to the
fortuitous temperature and humidity changes during its downward drift.
If physicists achieved a fundamental theory, it would tell us which
aspects of nature were direct consequences of the bedrock theory (just
as the symmetrical template of snowflakes is due to the basic
structure of a water molecule) and which are (like the distinctive
pattern of a particular snowflake) the outcome of accidents. The
accidental features could be imprinted during the cooling that follows
the big bang -- rather as a piece of red-hot iron becomes magnetised
when it cools down, but with an alignment that may depend on chance
factors. It may turn out (though this would be a disappointment to
many physicists if it did) that the key numbers describing our
universe, and perhaps some of the so-called constants of laboratory
physics as well, are mere "environmental accidents", rather than being
uniquely fixed throughout the multiverse by some final theory. This is
relevant to some now-familiar arguments (explored further in my book
Our Cosmic Habitat) about the surprisingly fine-tuned nature of our
universe.

Fine Tuning -- A Motivation For Suspecting That Our "Universe" Is One
Of Many.

The nature of our universe depended crucially on a recipe encoded in
the big bang, and this recipe seems to have been rather special. A
degree of fine tuning -- in the expansion speed, the material content
of the universe, and the strengths of the basic forces -- seems to
have been a prerequisite for the emergence of the hospitable cosmic
habitat in which we live. Here are some prerequisites for a universe
containing organic life of the kind we find on Earth:

First of all, it must be very large compared to individual particles,
and very long-lived compared with basic atomic processes. Indeed this
is surely a requirement for any hypothetical universe that a science
fiction writer could plausibly find interesting. If atoms are the
basic building blocks, then clearly nothing elaborate could be
constructed unless there were huge numbers of them. Nothing much could
happen in a universe that was too short-lived: an expanse of time,
as well as space, is needed for evolutionary processes. Even a
universe as large and long-lived as ours could be very boring: it
could contain just black holes, or inert dark matter, and no atoms at
all; it could even be completely uniform and featureless. Moreover,
unless the physical constants lie in a rather narrow range, there
would not be the variety of atoms required for complex chemistry.

If our existence depends on a seemingly special cosmic recipe, how
should we react to the apparent fine tuning? There seem three lines to
take: we can dismiss it as happenstance; we can acclaim it as the
workings of providence; or (my preference) we can conjecture that our
universe is a specially favoured domain in a still vaster multiverse.
Some seemingly "fine tuned" features of our universe could then only
be explained by "anthropic" arguments, which are analogous to what any
observer or experimenter does when they allow for selection effects in
their measurements: if there are many universes, most of which are not
habitable, we should not be surprised to find ourselves in one of the
habitable ones.

Testing Specific Multiverse Theories Here And Now
We may one day have a convincing theory that tells us whether a
multiverse exists, and whether some of the so called laws of nature
are just parochial by-laws in our cosmic patch. But while we're
waiting for that theory -- and it could be a long wait -- the "ready
made clothes shop" analogy can already be checked. It could even be
refuted: this would happen if our universe turned out to be even more
specially tuned than our presence requires. Let me give two quite
separate examples of how this style of reasoning can be used to refute
specific hypotheses.

(i) Ludwig Boltzmann argued that our entire universe was an
immensely rare "fluctuation" within an infinite and eternal
time-symmetric domain. There are now many arguments against this
hypothesis, but even when it was proposed one could already have
noted that fluctuations in large volumes are far more improbable
than in smaller volumes.

So, it would be overwhelmingly more likely, if Boltzmann were
right, that we would be in the smallest fluctuation compatible with
our existence. (Indeed, the most probable fluctuation would be a
disembodied brain that merely simulated the sensations of the
external world.) Whatever our initial assessment of Boltzmann's
theory, its probability would plummet if we came to accept the
extravagant scale of the cosmos.

(ii) Even if we knew nothing about how stars and planets formed, we
would not be surprised to find that our Earth's orbit wasn't highly
eccentric: if it had been, water would boil when the Earth was at
perihelion and freeze at aphelion -- a harsh environment
unconducive to our emergence. However, a modest orbital
eccentricity (certainly up to 0.1) is plainly not incompatible with
life. If it had turned out that the Earth moved in a near-perfect
circle (with eccentricity, say, less than 0.00001), this would be
a strong argument against a theory that postulated anthropic
selection from orbits whose eccentricities had a "Bayesian prior"
that was uniform in the range from zero to one.

We could apply this style of reasoning to the important numbers of
physics (for instance, the cosmological constant lambda) to test
whether our universe is typical of the subset that could harbour
complex life. Lambda has to be below a threshold to allow
protogalaxies to pull themselves together by gravitational forces
before gravity is overwhelmed by cosmical repulsion (which happens
earlier if lambda is large). An unduly fierce cosmic repulsion would
prevent galaxies from forming.

Suppose, for instance, that (contrary to current indications) lambda
was thousands of times smaller than it needed to be merely to ensure
that galaxy formation wasn't prevented. This would raise suspicions
that it was indeed zero for some fundamental reason. (Or that it had a
discrete set of possible values, and all the others were well above
the threshold.)

The methodology requires us to decide what values of a particular
physical parameter are compatible with our emergence. It also requires
a specific theory that gives the relative Bayesian priors for any
particular value. For instance, in the case of lambda, are all values
equally probable? Are low values favoured by the physics? Or is there
a finite number of discrete possible values, depending on how the
extra dimensions "roll up"? With this information, one can then ask if
our actual universe is "typical" of the subset in which we could have
emerged. If it is a grossly atypical member even of this subset (not
merely of the entire multiverse) then we would need to abandon our
hypothesis. By applying similar arguments to the other numbers, we
could check whether our universe is typical of the subset that
could harbour complex life. If so, the multiverse concept would be
corroborated.
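
A toy sketch of this typicality test, with an invented flat prior, an
arbitrary threshold, and a made-up "observed" value (none of them real
cosmological numbers), might look like this:

  # Toy sketch of the typicality test described above: draw a parameter
  # (think of it as lambda in arbitrary units) from an assumed prior, keep
  # only values compatible with galaxy formation, and ask whether the
  # observed value is typical of that conditioned subset.
  import numpy as np

  rng = np.random.default_rng(1)
  prior_draws = rng.uniform(0.0, 1.0, 1_000_000)  # assumed flat Bayesian prior
  threshold = 0.01                                # pretend: galaxies need lambda < threshold
  habitable = prior_draws[prior_draws < threshold]

  observed = 0.007                                # pretend measured value (same units)
  percentile = np.mean(habitable < observed) * 100
  print(f"observed value sits at the {percentile:.0f}th percentile of the habitable subset")
  # A value thousands of times below the threshold would count against
  # the anthropic-selection hypothesis.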

As another example of how "multiverse" theories can be tested,
consider Smolin's conjecture that new universes are spawned within
black holes, and that the physical laws in the daughter universe
retain a memory of the laws in the parent universe: in other words
there is a kind of heredity. Smolin's concept is not yet bolstered by
any detailed theory of how any physical information (or even an arrow
of time) could be transmitted from one universe to another. It has,
however, the virtue of making a prediction about our universe that can
be checked. If Smolin were right, universes that produce many black
holes would have a reproductive advantage, which would be passed on to
the next generation. Our universe, if an outcome of this process,
should therefore be near-optimum in its propensity to make black
holes, in the sense that any slight tweaking of the laws and constants
would render black hole formation less likely. (I personally think
Smolin's prediction is unlikely to be borne out, but he deserves our
thanks for presenting an example that illustrates how a multiverse
theory can in principle be vulnerable to disproof.) These examples
show that some claims about other universes may be refutable, as any
good hypothesis in science should be. We cannot confidently assert
that there were many big bangs -- we just don't know enough about the
ultra-early phases of our own universe. Nor do we know whether the
underlying laws are "permissive": settling this issue is a challenge
to 21st century physicists. But if they are, then so-called anthropic
explanations would become legitimate -- indeed they'd be the only type
of explanation we'll ever have for some important features of our
universe.

A Keplerian Argument
The multiverse concept might seem arcane, even by cosmological
standards, but it affects how we weigh the observational evidence in
some current debates. Our universe doesn't seem to be quite as simple
as it might have been. About 5 percent of its mass is in ordinary
atoms; about 25 percent is in dark matter (probably a population of
particles that survived from the very early universe); and the
remaining 70 percent is latent in empty space itself.

Some theorists have a strong prior preference for the simplest
universe and are upset by these developments. It now looks as though
a craving for such simplicity will be disappointed. Perhaps we can
draw a parallel with debates that occurred 400 years ago. Kepler
discovered that planets moved in ellipses, not circles. Galileo was
upset by this. In his "Dialogues concerning the two chief systems of
the world" he wrote "For the maintenance of perfect order among the
parts of the Universe, it is necessary to say that movable bodies are
movable only circularly".

To Galileo, circles seemed more beautiful; and they were simpler --
they are specified just by one number, the radius, whereas an ellipse
needs an extra number to define its shape (the "eccentricity"). Newton
later showed, however, that all elliptical orbits could be understood
by a single unified theory of gravity. Had Galileo still been alive
when Principia was published, Newton's insight would surely have
joyfully reconciled him to ellipses.

The parallel is obvious. A universe with at least three very different
ingredients may seem ugly and complicated. But maybe this reflects our
limited vision. Our Earth traces out just one ellipse out of an
infinity of possibilities, its orbit being constrained only by the
requirement that it allows an environment conducive to evolution (not
getting too close to the Sun, nor too far away). Likewise, our
universe may be just one of an ensemble of all possible universes,
constrained only by the requirement that it allows our emergence. So
I'm inclined to go easy with Occam's razor: a bias in favour of
"simple" cosmologies may be as short-sighted as was Galileo's
infatuation with circles.

What we've traditionally called "the universe" may be the outcome of
one big bang among many, just as our Solar System is merely one of
many planetary systems in the Galaxy. Just as the pattern of ice
crystals on a freezing pond is an accident of history, rather than
being a fundamental property of water, so some of the seeming
constants of nature may be arbitrary details rather than being
uniquely defined by the underlying theory. The quest for exact
formulas for what we normally call the constants of nature may
consequently be as vain and misguided as was Kepler's quest for the
exact numerology of planetary orbits. And other universes will become
part of scientific discourse, just as "other worlds" have been for
centuries. We may one day have a convincing theory that accounts for
the very beginning of our universe, tells us whether a multiverse
exists, and (if so) whether some so-called laws of nature are just
parochial by-laws in our cosmic patch. Physical reality, on this view,
may be vastly larger than the domain we can now (or, indeed, can ever)
observe. Most physicists hope
to discover a fundamental theory that will offer unique formulae for
all the constants of nature. But perhaps what we've traditionally
called our universe is just an atom in an ensemble -- a multiverse
punctuated by repeated big bangs, where the underlying physical laws
permit diversity among the individual universes.

Even though some physicists still foam at the mouth at the prospect
of being "reduced" to these so-called anthropic explanations, such
explanations may turn out to be the best we can ever discover for some
features of our universe (just as they are the best explanations we
can offer for the shape and size of Earth's orbit). Cosmology will
have become more like the science of evolutionary biology. Nonetheless
(and here physicists should gladly concede to the philosophers), any
understanding of why anything exists -- why there is a universe (or
multiverse) rather than nothing -- remains in the realm of
metaphysics.

Sir Martin Rees, a cosmologist, is Royal Society Professor at King's
College, Cambridge. He directs a research program at Cambridge's
Institute of Astronomy. His most recent book is Our Cosmic Habitat.
________________________________________________________________

"How will people think about the soul?"
Cognitive scientists believe that emotions, memories, and
consciousness are the result of physical processes. But almost nobody
else does. Common sense tells us that our mental life is the product
of an immaterial soul, one that can survive the destruction of the
body and brain. The physical basis of thought is, as Francis Crick put
it, "an astonishing hypothesis", one that few take seriously.

You might think that this will soon change. After all, people once
thought that the earth was flat and that mental illness was caused by
demonic possession. But the belief in the immaterial soul is different.
It is
rooted in our experience -- our gut feeling, after all, is not that we
are bodies; it is that we occupy them. Even young children are
dualists -- they appreciate and enjoy tales in which a person leaves
his body and goes to faraway lands, or in which a frog turns into a
prince. And when they come to think about death, they readily accept
that the soul lives on, drifting into another body or ascending to
another world.

When the public hears about research into the neural basis of thought,
they learn about specific findings: this part of the brain is involved
in risk taking, that part is active when someone thinks about music,
and so on. But the bigger picture is not yet generally appreciated,
and it is an interesting question how people will react when it is.
(We are seeing the first signs now, much of it in the recent work of
novelists such as Jonathan Franzen, David Lodge, and Ian McEwan.) It
might be that non-specialists will learn to live with the fact that
their gut intuitions are mistaken, just as non-physicists accept that
apparently solid objects are composed of tiny moving particles. But
this may be optimistic. The notion that our souls are flesh is
profoundly troubling, in large part because of what it means for the
idea of life after death. The same sorts of controversies that raged
over the study and teaching of evolution in the 20th century might
well spill over to the cognitive sciences in the years to follow.

Paul Bloom is Professor of Psychology at Yale and author of How
Children Learn the Meanings of Words (Learning, Development, and
Conceptual Change).
________________________________________________________________

"How can we understand the fact that such complex and precise
mathematical relations inhere in nature?"

Of course this is one of the oldest philosophical questions in science
but still one of the most mysterious. For most of Western history the
canonical answer has been some version of Platonism, some variation
on the essentially Pythagorean idea that the material universe has
been formed according to a set of transcendent and a priori
mathematical relations or laws. These relations/laws Pythagoras
himself called the divine harmonia of the cosmos, and have often been
referred to since as the "cosmic harmonies" or the "music of the
spheres". For Pythagoras numbers were actually gods, and the quest for
mathematical relations in nature was a quest for the divine archetypes
by which he believed that matter had literally been in-formed.
Throughout the age of science, and even today, most physicists seem to
be Platonists. Many are even Pythagoreans, implicitly (if not always
with much conscious reflection) making an association between the
mathematical laws of nature and a transcendent being. The common
association today of a "theory of everything" with "the mind of God"
is simply the latest efflorescence of a two-and-a-half-millennia-old
tradition which has always viewed physics as a quasi-religious
activity.

Can we get beyond Platonism in our understanding of nature's
undeniable propensity to realize extraordinarily sophisticated
mathematical relations? Although I began my own life in science as a
Platonist I have come to believe that this philosophical position is
insupportable. It is not a rationally justifiable position at all, but
simply a faith. Which is fine if one is prepared to admit as much,
something few physicists seem willing to do. To believe in an a priori
set of laws (perhaps even a single law) by which physical matter had
to be informed seems to me just a disguised version of deism -- an
outgrowth of Judeo-Christianity wrapped up in scientific language. I
believe we should do better than this, that we should articulate (and
need to articulate) a post-Platonist understanding of the so-called
"laws of nature." It is a far from easy task, but not an impossible
one. Just as the mathematician Brian Rotman has put forward a
post-Platonist account of mathematics, we need to achieve a similar
move for physics and our mathematical description of the world itself.

Margaret Wertheim is a science writer and commentator and the author
of The Pearly Gates of Cyberspace: A History of Space from Dante to
the Internet.
________________________________________________________________

"Where Are They?"
When Enrico Fermi asked his famous question (now known as the Fermi
Paradox) more than fifty years ago -- if there is advanced
extraterrestrial life, intelligence, and technology, why don't we see
unmistakable evidence of it? -- it was the era of 60-megaton
atmospheric bomb tests and broadcast television, with unlimited fusion
power in plain sight.

Now, we don't even have underground testing, TV has gone cable,
wireless is going spread-spectrum, technology has grown microscopic,
our children encrypt text with PGP and swap audio via MP3, and Wolfman
Jack no longer broadcasts across the New Mexico desert at 50,000
watts.

Fermi's question is still worth asking -- and may not be the paradox
we once thought.

George Dyson is a historian among futurists and the author of Darwin
Among the Machines.
________________________________________________________________

"What is the nature of learning?"

That question strikes me as being as infinitely perplexing and
personal as, What's the meaning of life? But that's the beauty of its
ambiguity, and the challenge I enjoy: grasping at its slippery
complexity.

Recent insights into the neural basis of memory have provided a couple
of key pieces to the puzzle of learning. The neuropsychological
research on "elaborative encoding," for example, has shown that the
long-term retention of information involves a spontaneous,
connection-making process that produces web-like associative linkages
of evocative images, words, objects, events, ideas, sensory
impressions and experiences.

Parallel insights have emerged from the exploratory work on learning
that's being conducted in the field of education and business, which
involves constructing multi-dimensional symbolic models. The symbolic
modeling process enables people to give form to their thoughts, ideas,
knowledge, and viewpoints. By making tangible the unconscious creative
process by which we use our tacit and explicit knowledge, the symbolic
models help reveal what we think, how we think and what we remember.
They represent our thought processes in a deep and comprehensive way,
showing the different ways we use our many intelligences, styles of
learning, and creative inquiry. In effect, the models demonstrate how
people create things to remember, and remember things by engaging in a
form of physical thinking.
Underneath our layers of individuality lives a core of universal
emotions that comprise a "global common language." This language of
feelings and sensory impressions not only unites us as human beings,
but also connects our creative process. It also enables us to generate
ideas together, create new knowledge and transfer it, come to some
deep shared understanding of ourselves or a given subject, as well as
communicate this understanding across the various cultural, social and
educational barriers that divide us. The studies on elaborative
encoding provide some basic insights into how these symbolic models
work as a kind of global common language, which people use to freely
build on the things they already know and have an emotional connection
with.
In short: the symbolic models open up other pathways to understanding
the brainwork behind learning, remembering and the process by which we
selectively apply what we learn when we create.

As Dr. Barry Gordon of Johns Hopkins School of Medicine states, "What
we think of as memories are ultimately patterns of connection among
nerve cells." The Harvard psychologist Daniel Schachter arrived at a
similar conclusion when examining the 'unconscious processes of
implicit knowledge' and its relation to memory.

Clearly, when our brains are engaged by information that, literally
and figuratively speaking, "connects with us" (in more ways than one),
we not only remember it better, but tend to creatively act on it as
well. Symbolic modeling makes this fact self-evident.
How can we improve the way we learn, and foster the learning process
over a lifetime? How can we make the information we absorb daily more
personally meaningful, purposeful and memorable?

The answers remain to be seen in our connection-making process. This
private act of creation is becoming increasingly more public and
apparent through functional MRI studies and other medical imaging
techniques. Perhaps a more productive strategy for illuminating this
connection-making process would be to combine these high-tech
"windows" to the world of the mind with low-tech imaging tools, such
as symbolic modeling. The combination of these tools would provide a
more comprehensive picture of learning.
The ability to learn -- or inability -- seems to determine our happiness
and well-being, not to mention the success we experience from
realizing our potential. Understanding the conditions that galvanize
great, memorable learning experiences will move us closer to
understanding the creative engine that powers our individual and
collective growth: learning.

Todd Siler is the founder and director of Psi-Phi Communications and
author of Think Like A Genius. 
________________________________________________________________

"Will humankind be able to use its growing self-knowledge to overcome
the biologically programmed instincts that could otherwise destroy
it?"
I am intrigued by the interplay between the following:

1) People always want a little bit more than they have.
2) The economic and political systems built on this instinct are
conquering the world.
3) Yet there is no correlation between owning a little bit more and
happiness. Instead, the long-term effect of everyone seeking to own
a little bit more could be calamitous.

Historically, religious figures have appealed to people to overrule
their greed with a concern for some higher good. In our supposed
scientific age, these arguments have lost their force. Instead, our
public affairs are governed by the idea that people should be as free
as possible to choose what they want.
But what if people are programmed to make choices that are not in
their own best long-term interest? Suppose we discovered that what we
instinctively thought would bring us happiness is an illusion created
by our human-gene-built brains to induce human-gene-spreading
behavior?
Today's evolutionary psychologists provide compelling arguments why
this picture might be accurate. A species programmed to acquire stuff
might well spread itself successfully across the globe. But evolution
is blind. It has no plan regarding what might happen to that species
when the globe has been conquered. And in the meantime our genes don't
give a damn about our happiness. For them it's just another
propagation technology... perhaps made doubly efficient by ensuring
the carrot is yanked away each time it comes within reach. To achieve
true happiness we may need to be a great deal wiser than the loudest
demons in our head would suggest.
Will the new model of "Why We Are The Way We Are" finally convince us
that our political and economic systems, and the assumptions on which
they are based, are dangerously flawed? (The problem isn't just the
economists' assumption that "greed is good", or the politicians'
assumption that "growth is good". We've all been brought
up to believe: "natural is good". As if it weren't the most natural
thing in the world for a planet to self-destruct.)
And how long will it take for the new ideas to have any impact? (What
if it were to take 50 years? In an era of exponential growth, and
accelerating technological change, can we afford even 10?)
More generally, can memes that have evolved in a single generation
countermand the influence of genes that evolved over millions of
years?

Chris Anderson is the incoming Chairman and Host of the TED Conference
(Technology, Education, Design) held each February in Monterey,
California and formerly a magazine publisher (Future Publishing).
________________________________________________________________

"If the medium is indeed the message, does (or can) the message define
the medium?"

(As a poet, I don't think I need to explicate the question.)

Gerd Stern is a poet, media artist and cheese maven and the author of
an oral history From Beat Scene Poet to Psychedelic Multimedia Artist
1948-1978. 
________________________________________________________________

"What is the nature of fads, fashions, crazes, and financial manias?
Do they share a structure that can in turn be found at the core of
more substantial changes in a culture? In other words, is there an
engine of change to be found in the simple fad that can explain and
possibly predict or accelerate broader changes that we regard as less
trivial than "mere" fads? And more importantly, can we quantify the
workings of this engine if we decide that it exists?"

I have shelves of books and papers by smart people who have brushed up
against the edge of this question but who have seldom attacked it head
on. I'm drawn to the question, and have been obsessed with it for
years, because I think it's one of the big ones. It touches on
everything humans do.

Fashions and fads are everywhere; in things as diverse as food,
furnishings, clothes, flowers, children's names, haircuts, body image,
even disease symptoms and surgical operations. Apparently, even the
way we see Nature and frame questions about it is affected to some
extent by fashion; at least according to those who would like to throw
cold water on somebody else's theory. (In the current discussion, Paul
Davies says, "Of late, it is fashionable among leading physicists and
cosmologists to suppose that alongside the physical world we see lies
a stupendous array of alternative realities...")

But the ubiquity of fads has not led to deep understanding, even
though there are serious uses to which a working knowledge of fads
could be put. A million children each year die of dehydration, often
where rehydration remedies are available. What if rehydration became
fashionable among those children's mothers? Public health officials
have many times tried to make various behaviors fashionable. In
promoting the use of condoms in the Philippines or encouraging girls
in Africa to remain in school, they've reached for popular songs and
comic books to deliver the message, hoping to achieve some kind of
liftoff. Success has been real, but too often temporary or sporadic.
Would a richer understanding of fads have helped them create better
ones?

In trying to understand these phenomena, writers have been engaged in
a conversation that has spanned more than a hundred years. In 1895
Gustave LeBon's speculations on "The Crowd" contained some cockeyed
notions, and some that are still in use today. Ludwik Fleck, writing
on "The Evolution of a Scientific Fact" in the thirties, in part
inspired Thomas Kuhn's writings on the structure of scientific
revolutions in the sixties. Everett Rogers's books on the "Diffusion
of Innovations" led to hundreds of other books on the subject and made
terms like early adopters and agents of change part of the language.
For several decades positive social change has been attempted through
a practice called Social Marketing, derived in part from advertising
techniques. Diffusion and social marketing models have been used
extensively in philanthropy, often with success. But to my knowledge
these techniques have not yet led to a description of the fad that's
detailed and testable.

Malcolm Gladwell was stimulating in identifying elements of the fad in
The Tipping Point but we are still left with a recipe that calls for a
pinch of this and a bit, but not too much, of that.

Richard Dawkins made a dazzling frontal assault on the question when
he introduced the idea of memes in The Selfish Gene. The few pages he
devoted to the idea have inspired a number of books and articles in
which the meme is considered to be a basic building block of social
change, including fads. But as far as I can tell, the meme is still a
fascinating idea that urges us toward experiments that are yet to be
done.

Whether memes or some other formulation turns out to be the engine of
fads, the process seems to go like this: a signal of some kind
produces a response that in turn acts as a signal to the next person,
with the human propensity for imitation possibly playing a role. This
process of signal-response-signal might then spread with growing
momentum, looking something like biological contagion. But other
factors may also apply, as in Steve Strogatz's examination of how
things sync up with one another. Or Duncan Watts's exploration of how
networks of all kinds follow certain rules of efficiency. Or the way
crowds panic in a football stadium or a riot. Or possibly even the
studies on the way traffic flows, including the backward generated
waves that cause mysterious jams. The patterns of propagation may turn
out to be more interesting than anything else.

Fads and fashions have not been taken very seriously, I think, for at
least three reasons. They seem short-lived, they're often silly and
they seem like a break with normal, rational behavior. But as for
being short-lived, the history of fads gives plenty of examples of
fads that died out only to come back again and again, eventually
becoming customary, including the use of coffee, tomatoes and hot
chocolate. As for silliness, some fashions are not as silly as they
seem. Fashions having to do with the length of one's hair seem
trivial; yet political and religious movements have often relied on
the prominence or absence of hair as a rallying symbol. And fads are
far from aberrational. There are probably very few people alive who,
at any one time, are not under the sway of a fad or fashion, if not
dozens of them. And this is not necessarily a vacation from rational
behavior on our part. On the contrary, it might be essential to the
way we maximize the effectiveness of our choices. Two economists in
California have developed a mathematical model suggesting that in
following the lead of others we may be making use of other people's
experience in a way that gives us a slightly higher chance of success
in adopting a new product. The economists say this may explain a burst
of popularity for a new product and possibly throw light on fads
themselves.
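
Alda does not name the model, but a standard formalization of this
kind of herding is the information cascade: each person weighs a noisy
private signal against the visible choices of everyone who went
before, and imitation can lock in after only a few choices. A minimal
sketch in Python, under assumed toy rules (the naive counting rule and
the parameters are illustrative, not the economists' actual model):

import random

def run_cascade(n_agents=30, p=0.7, product_is_good=True, seed=None):
    # Each agent gets a private signal that is right with probability p,
    # sees every earlier choice, and adopts (+1) or rejects (-1) according
    # to the simple tally of prior choices plus its own signal.
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = 1 if (rng.random() < p) == product_is_good else -1
        score = sum(choices) + signal
        if score > 0:
            choices.append(1)        # adopt
        elif score < 0:
            choices.append(-1)       # reject
        else:
            choices.append(signal)   # tie: follow one's own signal
    return choices

print(run_cascade(seed=1))
# Once the running tally reaches +2 or -2, no later private signal can
# flip a decision; everyone imitates, and the burst of popularity
# (or of rejection) locks in regardless of later evidence.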

But another reason fads may not have been examined in more detail, and
this could be the killer, is that at least for the moment they just
seem too complicated. Trying to figure out how to track and explain
change is one of the oldest and toughest of questions. Explaining
change among people in groups is perhaps complex beyond measure, and
may turn out to be undoable. It may forever be an art and not a
science. But still, the humble fad is too tantalizing to ignore.

We take it for granted and dismiss it, even while we're in the rapture
of it. This commonplace thing that sits there like the purloined
letter may or may not turn out to contain a valuable message for us,
but it is staring us in the face.

Alan Alda, an actor, writer and director, is currently playing Richard
Feynman in the stage play QED at Lincoln Center in New York.
________________________________________________________________

"What comes after Science? When?"
Questions? I don't ask questions. I ask answers, and then make up the
questions as I see fit. I assemble vast collections of answers and
while finding the questions, I make connections in the process. These
connections are new answers, and depending on my mood and how much
time I have at my disposal, I set about finding questions for them as
well. Often, if not usually, the question I find is: "Why on earth am
I wasting my time on this (project du jour)?" Once in a great while,
I'll find that something I've cooked up in my multi-media cauldron
"fits" just right -- an appropriate gesture at a propitious moment,
and it arrives with no explanation, no equation, no excuse, no reason,
nothing -- it just sits there -- absolutely correct to itself in every
possible way.

The paradigm of Question/Answer doesn't really work in my world as
I've never really found Life, The Universe, and Everything (LU&E) and
most (but not all) of its constituent parts and systems to be
fundamentally amenable to it. From my research, I've come to a general
conclusion that LU&E and most of its parts are fundamentally not
knowable, or even humanly understandable in any linguistic or
mathematical sense, except when framed in a more narrow set of terms,
like "metaphor" or "pretend" or "just so".

A dear friend of mine once noted: "Nobody knows and you can't find
out" and I largely agree with him. However, I can also say that, like
being in the presence of a bucket of bricks, this is all more an
experiential thing, more like a synchronistic aesthetic moment and
less like a diachronistic or ahistorically definitive mathematical
proposition or linguistically intelligible conclusion. So, one can't
"know" it, nor can one "find out", but one can come to a sensibility
that is convincing at the time and creatively informs one's behaviour
and choices.

Hence, the only justice in this life is poetic, and everything else is
just some tweaky form of petty revenge or (more typically in this life
of entertainment and cultural anaesthesiology) dodging bullets while
one waits for the big storm to blow over.

It can be infuriating (to me and most everyone else, it seems) when my
work or research comes to such conclusions, but since when has there been
some big carved-in-stone guarantee that it's supposed to make sense in
the first place? Isn't a rational conclusion a bit presumptuous and
arrogant? From what I can gather, it seems that the complete object of
study fundamentally doesn't and shouldn't make sense (as sense seems
to be a tiny subset surrounded by a vast multitude of complex forms of
"nonsense"), and I see that not as a shortcoming on the part of the
Universe so much as an indication of the limitations of human
reason and the short time we get to spend on this planet.

But all this is probably not what you wanted to hear, so here's a good
question that's been bugging me for years and if anyone wants to
submit an answer, let me know -- I'm all ears...

Mister Warwick asks:

"What comes after Science? When?"

Henry Warwick is an artist, composer, and scientist.
________________________________________________________________

"What is time, and what is the right language to describe change, in a
closed system like the universe, which contains all of its observers?"

This is, I believe, the key question on which the quantum theory of
gravity and our understanding of cosmology depend. We have made
tremendous progress in recent years towards each goal, and we have come
to the point where we need a new answer to this question to proceed
further. The basic reason for this problem is that most notions of
time, change and dynamics which physics, and science more generally,
have used are background dependent. This means that they define time
and change in terms of fixed points of reference which are outside the
system under study and do not themselves change or evolve. These
external points of reference usually include the observer and the clocks
used to measure time. They constitute a fixed background against which
time and change are defined. Other aspects of nature usually assumed
to be part of the background are the properties of space, such as its
dimensionality and geometry.

General relativity taught us that time and space are parts of the
dynamical system of the world, which themselves change and evolve in
time. Furthermore, in cosmology we are interested in the study of a
system that by definition contains everything that exists, including
all possible observers. However, in quantum theory, observers seem to
play a special role, which only makes sense if they are outside the
system. Thus, to discover the right quantum theory of gravity and
cosmology we must find a new way to formulate quantum theory, as well
as the notions of time and change, to apply to a system with no fixed
background, which contains all its possible observers. Such a theory
is called background independent.

The transition from background dependent theories to background
independent ones is a basic theme of contemporary science. Related to
it is the change from describing things in terms of absolute
properties intrinsic to a given elementary particle, to describing
things in terms of relational properties, which define and describe
any part of the universe only through its relationships to the rest.

In loop quantum gravity we have succeeded in constructing a background
independent quantum theory of space and time. But we have not yet
understood completely how to put the observer inside the universe.
String theory, while it solves some problems, has not helped here, as
it is so far a purely background dependent theory. Indeed string
theory is unable to describe closed universes with a positive
cosmological constant, such as observations now favor.

Among the ideas which are now in play which address this issue are
Julian Barbour's proposal that time does not exist, Fotini
Markopoulou's proposal to replace the single quantum theory relevant
for observing a system from the outside with a whole family of quantum
theories, each a description of what an observer might see from a
particular event in the history of the universe and 't Hooft's and
Susskind's holographic principle. This last idea says that physics
cannot describe precisely what is happening inside a region of space,
instead we can only talk about information passing through the
boundary of the region. I believe these are relevant, but none go far
enough and that we need a radical reformulation of our ideas of time
and change.

As the philosopher Peirce said over a century ago, it is fundamentally
irrational to believe in laws of nature that are absolute and
unchanging, and have themselves no origin or explanation. This is an
even more pressing issue now, because we have strong evidence that the
universe, or at least the part in which we live, came into existence
just a few billion years ago. Were the laws of nature waiting around
eternally for a universe to be created to which they could apply? To
resolve this problem we need an evolutionary notion of law itself,
where the laws themselves evolve as the universe does. This was the
motivation for the cosmological natural selection idea that Martin
Rees is so kind to mention. That is, as Peirce understood, the notions
of evolution and self-organization must apply not just to living
things in the universe, but the structure of the universe and the laws
themselves.

Lee Smolin, a theoretical physicist, is a founding member of and
research physicist at the Perimeter Institute in Waterloo, Canada, and
author of Three Roads to Quantum Gravity.
________________________________________________________________

"Why doesn't conservation click?"

Three decades ago I began my first career working on a British
television series called "Survival". Unlike the current "Survivor"
series (about the politics of rejection while camping out) these were
natural history documentaries on a par with the best of National
Geographic and Sir David Attenborough: early recordings of humpback
whales, insights on elephant behavior, the diminishing habitats of
mountain gorillas and orangutans, a sweeping essay on the wildebeest
migration, and my favorite, an innovative look at the ancient baobab
tree.

In 2001 the "Survival" series died. It was a year when conservation
efforts lagged across the board, along with other failures to take the
long view. Survival programs may have told people what they could no
longer bear to hear (that the human species is soiling its own den)
without demonstrating constructive solutions. For example, there are
precious few incentives to develop alternate energy sources despite
the profound vulnerabilities that our dependence on foreign energy
revealed yet again. We have no "Vision Thing," despite the many clues.
"It's global warming, dude," a 28 year-old auto mechanic told The New
York Times as he fished in the Hudson River; "I don't care if the
whole planet burns up in a hundred years. If I can get me a fish
today, it's cool by me."

Happily, this provides some continuity with the question I posed at this
forum in 1998:

"If tragedy + time = comedy, what is the formula for equally
therapeutic music? Do (Blues) musicians reach a third person
perspective similar to that found in meditation, mind-altering
drugs, and genius?"

What I was reaching for with that third person perspective was a
selfless overview. What I've since found is that healing dances of
Native Americans and some African peoples follow the saga of a hero or
heroine, much the way you or I listen to Bob Dylan or Bonnie Raitt and
identify with their lyrics.

While Carl Jung delved into the healing ritual archetype among many
cultures, a new science called Biomusicology suggests even more
ancient origins, tracing the inspiration for human music to natural
sounds (the rhythm of waves lapping at the shore, rain and waterfalls,
bird song, breathing, and our mother's heartbeat when we were floating
in the womb). Songs of birds certainly influenced classical music, and
the call and response patterns of birds were imitated in congregations
and cotton fields, with shouts, which led to the Delta blues.

The salubrious influence of music, including research by Oliver Sacks,
is featured in a Discovery Channel program that I helped research.
"The Power of Music" will be broadcast in 2002, as will Sir David
Attenborough's new series on a similar theme, "Songs of the Earth."
But will these programs inspire viewers to relinquish their SUVs for a
hydrogen-powered car? How does one convince people to address global
warming when most minds are focused on the economy or terrorism?

Part one of this answer must include "An Ounce of Prevention." Richard
A. Clarke, former White House director of counterterrorism, explained
our ill preparedness for September 11 this way: "Democracies don't
prepare well for things that have never happened before." Another
senior analyst said, "Unfortunately, it takes a dramatic event to
focus the government's and public's attention." Finally, efforts to
prevent hijackings have been reactive, rarely proactive.

As we devise our New Year's Resolutions, how many of us will wait for
a scare (positive diagnosis) before we quit smoking, drinking or
sitting on our duff? Year 2002 should be the time when
conservationists not only demand action, but persuade people
everywhere that the demise of wild places can and should be stopped,
that some of our forces of habit (unneeded air conditioning, for
example) will eventually affect our quality of life in ways of greater
devastation. We need people to identify with the song lyrics of
others, who may live in distant lands, and feel the brunt of global
warming long before we do. But first we must learn to understand their
language.

In The Unbearable Lightness of Being, Milan Kundera wrote, "True human
goodness, in all its purity and freedom, can come to the fore only
when its recipient has no power. Mankind's true moral test, its
fundamental test (which lies deeply buried from view), consists of its
attitude toward those who are at its mercy: animals. And in this
respect mankind has suffered a fundamental debacle, a debacle so
fundamental that all others stem from it." Survival indeed.

Delta Willis has searched for fossils alongside Meave and Richard
Leakey, profiled physicists and paleontologists who draw inspiration
from nature, and serves as chief contributor to the Fodor's Guide to
Kenya & Tanzania.
________________________________________________________________

Why is it only amongst adults in the Western world that tradition has
been so insistently and constantly challenged by the raising of Edge
questions?

Why do we ask Edge questions?

Why do we ask Edge questions that challenge the "anesthesiology" of
accepted wisdom and so the traditional answers we are given as to who
and what we are? In most societies, accepted wisdom is to be respected
not questioned, and who and what we are have long been decided by
custom, elders, social betters and the sacred word of God. Moreover,
why is it that the asking of Edge questions has only thrived and been
encouraged in Western societies (with the help of such individuals as
Socrates and the contributors to this Edge project)?

Children, it should be noted, readily ask Edge-type questions. The
problem is that they stop when they become adults, except in the
civilization (with a few ups and downs) that started in Classical
Greece -- Western civilization.

"Are all our beliefs in gods, a myth, a lie foolishly cherished, while
blind hazard rules the world?" That perhaps is the first Edge question
(Euripides, Hecabe, lines 490-491) -- and importantly a question not
raised safely in private but before a large audience. Indeed,
Euripides raised it to gain public reward. Greek playwrights wrote
plays for competitions that were judged by ten randomly selected
members of the audience -- and, given that Euripides wanted to win, he
must have believed that the average Greek would be hearing this Edge
question raised about the Gods.

The public exploring of Edge questions is rare outside Western
societies. Instead, "what was finally persuasive was appeal to
established authority", and that, "the authority of tradition came to
have more convincing effect than even direct observation and personal
experience" (Robert Oliver, Communication And Culture In Ancient India
And China, 1971). And as the Japanese scholar Hajime Nakamura noted,
the Chinese "insisted that the traditional sacred books are more
authoritative than knowledge based upon sense and inference" (Ways Of
Thinking Of Eastern Peoples, 1964). Job might seem to be asking the
Edge question "Why do the just suffer and the wicked flourish?" But
the story of Job is not about rewarding Edge questioning but faith in
the wisdom of God: "Who is this that darkens my counsel with words
without knowledge".

This Edge question might be criticized as Eurocentric. But it was
Western intellectuals who first asked the Edge question about whether
one's own culture might be falsely privileged over others and so
invented the idea of ethnocentricity.

So my Edge question is this: why is it only amongst adults in the
Western world that tradition has been so insistently and constantly
challenged by the raising of Edge questions?

John R. Skoyles is a researcher in the evolution of human intelligence
in the light of recent discoveries about the brain, who, while a
first-year student at LSE, published a theory of the origins of
Western Civilization in Nature.
________________________________________________________________

Paul Davies Responds

Response to John McCarthy:

John McCarthy asks how animal behavior is encoded in DNA. May I
sharpen the question? One of the most remarkable manifestations of
inherited behavior is the way birds navigate accurately whilst
migrating over vast distances. I understand that part of this skill
lies with the bird's ability to use the positions of stars as beacons.
Does this imply that some avian DNA contains a map of the sky? Could a
scientist in principle sequence the DNA and reconstruct the
constellations?

Response to Martin Rees's response to my question:

Sir Martin Rees has eloquently outlined the key issues concerning the
status of multiverse theories. I should like to make a brief response
followed by a suggestion for further research.

Sir Martin raises the question of whether what we consider to be
fundamental laws of physics are in fact merely local bylaws applicable
to the universe we perceive. Implicit in this assumption is the fact
that there are laws of some sort anyway. By definition, a law is a
property of nature that is independent of time. We still need to
explain why universes come with such time-independent lawlike
features, even if a vast and random variety of laws is on offer. One
might try to counter this by invoking an extreme version of the
anthropic theory in which there are no laws, just chaos. The apparent
day-by-day lawfulness of the universe would then itself be
anthropically selected: if a crucial regularity of nature suddenly
failed, observers would die and cease to observe. But this theory
seems to be rather easily falsified.

As Sir Martin points out, if a particular remarkable aspect of the
laws is anthropically selected from a truly random set, then we would
expect on statistical grounds the aspect concerned to be just
sufficient to permit biological observers. Consider, then, the law of
conservation of electric charge. At the atomic level, this law is
implied by the assumed constancy of the fine-structure constant. (I
shall sidestep recent claims that this number might vary over
cosmological time scales.) Suppose there were no such fundamental law,
and the unit of electric charge varied randomly from moment to moment?
Would that be life-threatening? Not if the variations were small
enough. The fine-structure constant affects atomic fine-structure, not
gross structure, so that most chemical properties on which life as we
know it depends are not very sensitive to the actual value of this
number.
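
For reference (a gloss of mine, not Davies's wording): the
fine-structure constant is

    \alpha = e^2 / (4 \pi \varepsilon_0 \hbar c) \approx 1/137,

so, with \hbar and c held fixed, a constant \alpha at the atomic level
is the same thing as a constant unit of electric charge e -- the sense
in which the constancy of charge is tied to the fine-structure
constant.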

In fact, the fine-structure constant is known to be constant to better
than one part in a hundred million. A related quantity, the anomalous
magnetic moment of the electron, is known to be constant to even
greater accuracy. Variations several orders of magnitude larger than
this would not render the universe hostile to carbon-based life. So
the constancy of electric charge at the atomic level is an example of
a regularity of nature far in excess of what is demanded by anthropic
considerations. Even a multiverse theory that treated this regularity
as a bylaw would need to explain why such a bylaw exists.

I now turn to my meta-question of whether the multiverse might be no
better than theism in modern scientific language. It is possible that
this claim can be tested using a branch of mathematics known as
algorithmic information theory, developed by Kolmogorov and Chaitin.
This formalism offers a means to quantify Occam's Razor, by
quantifying the complexity of explanations. (Occam's Razor suggests
that, all else being equal, we should prefer the simplest explanation
of the facts.)
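
To make that concrete (a standard textbook gloss, not Davies's own
formalism): algorithmic information theory assigns to any description
x the quantity

    K(x) = the length, in bits, of the shortest program that outputs x,

and a quantified Occam's Razor then prefers the hypothesis that
specifies the observed facts with the smallest K. Comparing theism
with the multiverse, as proposed here, would amount to comparing how
much algorithmic information each must invoke to pick out our
universe.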

On the question of how to explain certain fine-tuned bio-friendly
aspects of the universe, the crude response "God made it that way" is
infinitely complex (and therefore very unsatisfying), because God
might have made one of an infinite number of alternative universes.
Put differently, the selection set -- the "shopping list" of universes
available to an omnipotent Deity -- contains an infinite amount of
information, so the act of selection from this set involves discarding
this infinite quantity of information. In the same way, the multiverse
contains an infinite amount of information. In this case we observers
are the selectors, but we still discard an infinite quantity of
information by failing to observe the other universes. A proper
mathematical parameterization of various multiverse theories and
various theological models should enable this comparison to be made
precise.

Even if the two modes of explanation -- theistic and multiverse --
turned out to be mathematically equivalent, one might still argue
(as Sir Martin has done) for the superiority of the multiverse theory
on the grounds that the other universes, whilst not directly
observable, are nevertheless strongly implied by extrapolation from
the structure of our physical theories. But a theist would readily
counter that the existence of God, whilst not directly observable, is
nevertheless strongly implied by extrapolation from the nature of the
world, human wisdom, mystical revelation, moral awareness, etc.

I argued in my book The Mind of God that most attempts at ultimate
explanations run into this "tower of turtles" problem: one has to
start somewhere in the chain of reasoning, with a certain unproved
given, be it God, mathematics, a physical principle, revelation, or
something else. That is because of an implied dualism common to
scientific and theistic explanations alike. In science the dualism is
between states of the world and abstract laws. In theism it is between
creature (i.e. the physical universe) and Creator.

But is this too simplistic? Might the physical world and its
explanation be ultimately indecomposable? Should we consider
alternative modes of description than one based on linear reasoning
from an unproved given, which after all amounts to invoking a magical
levitating superturtle at the base of the tower? That is what I meant
by the "Third Way" in my original question.
________________________________________________________________

Could our lack of theoretical insight into some of the most basic
questions in biology in general, and consciousness in particular, be
related to us having missed a third aspect of reality, which upon
discovery will be seen to always have been there, equally ordinary as
space and time, but so far somehow overlooked in scientific
descriptions?

Is the arena of physics, constructed out of space and time with
matter/energy tightly interwoven with space and time, sufficient to
fully describe all of our material world? The most fundamental debates
in cognitive science take a firm "yes" for granted. The question of
the nature of mind then leaves open only two options: either a form of
reductionism, or a form of escapism. The latter option, a dualist
belief in a separate immaterial mental realm, has fallen out of favor,
largely because of the astounding successes of natural science. The
former, reductionism, is all that is left, whether it is presented in
a crude form (denial of consciousness as real or important) or in a
more fancy form (using terms like emergence, as if that would have any
additional explanatory power).

The question I ask myself is whether there could not be another
equally fundamental aspect to reality, on a par with space and time,
and just as much a part of the material world?

Imagine that some tribe had no clear concept of time. Thinking only in
terms of space, they would have a neat way to locate everything in
space, and they would scoff at superstitious notions that somehow
there would be "something else", wholly other than space and the
material objects contained therein. Of course they would see things
change, but both during and after each change everything has its
location, and the change would be interpreted as a series of purely
spatial configurations.

Yet such a geometric view of the world is not very practical. In
physics and in daily life we use time in an equally fundamental way as
space. Even though everything is already "filled up" with space,
similarly everything participates in time. Trying to explain that to
the people of the no-time tribe may be difficult. They will see the
attempt at introducing time as trying to sneak in a second type of
space, perhaps a spooky, ethereal space, more refined in some way,
imbued with different powers and possibilities, but still as a
geometric something, since it is in these terms that they are trained
to think. And they probably would see no need for such a parallel
pseudo-space.

In contrast, we do not consider time to be in any way less "physical"
than space. Neither time nor space can be measured as such, but only
through what they make possible: distances, durations, motion. While
space and time are in some sense abstractions, and not perceivable as
such, they are enormously helpful concepts in ordering everything that
is perceivable into a coherent picture. Perhaps our problems in coming
up with a coherent picture of mental phenomena tell us that we need
another abstraction, another condition of possibility for phenomena in
this world, this very material world we have always lived in.

Could it be that we are like that tribe of geometers, and that we have
so far overlooked a third aspect of reality, even though it may be
staring us in the face? Greek mathematicians used time to make their
mathematical drawings and construct their theories, yet they
disregarded time as non-essential in favor of a Platonic view of
unchanging eternal truths. It took two thousand years until Newton and
Leibniz invented infinitesimal calculus, which opened the door for
time to finally enter mathematics, thus making mathematical physics
possible.

To reframe my question: could our lack of theoretical insight into some
of the most basic questions in biology in general, and consciousness
in particular, be related to us having missed a third aspect of
reality, which upon discovery will be seen to always have been there,
equally ordinary as space and time, but so far somehow overlooked in
scientific descriptions?

Although I don't know the answer, I suspect we will stumble upon it
through a trigger that will come from engineering. Newton did not work
in a vacuum. He built upon what Galileo, Descartes, Huygens and others
had discovered before him, and many of those earlier investigations
were triggered by concrete applications, in particular the
construction of powerful cannons calling for better ways to compute
ballistic orbits. Another example is the invention of thermodynamics.
It took almost two centuries for Newtonian mechanics to come to grips
with time irreversibility. Of course, every physicist had seen how
stirring sugar in a cup of tea is not reversible, but until
thermodynamics and statistical mechanics came along, that aspect of
reality had mostly been ignored. The engineering problems posed by the
invention of steam engines were what forced a deeper thinking about
time irreversibility.

Perhaps current engineering challenges, from quantum computers to
robotics to attempts to simulate large-scale neural interactions, will
trigger a fresh way of looking at the arena of space and time,
perchance finding that we have been overlooking an aspect of material
reality that has been quietly with us all along.

Piet Hut, professor of astrophysics at the Institute for Advanced
Study, in Princeton, is involved in the project of building GRAPEs,
the world's fastest special-purpose computers.
________________________________________________________________

"Is the universe really expanding? Or: Did Einstein get it exactly
right?"

As I prepare to head for Cambridge (the Brits' one) for the conference
to mark Stephen Hawking's 60th birthday, I know that the suggestion I
am just about to make will strike the great and the good who are
assembling for the event as my scientific suicide note. Suggesting
time does not exist is not half as dangerous for one's reputation as
questioning the expansion of the universe. That is currently believed
as firmly as terrestrial immobility in the happy pre-Copernican days.
Yet the idea that the universe in its totality is expanding is odd to
say the least. Surely things like size are relative? With respect to
what can one say the universe expands?

When I put this question to the truly great astrophysicists of our day
like Martin Rees, the kind of answer I get is that what is actually
happening is that the intergalactic separations are increasing
compared with the atomic scales. That's relative, so everything is
fine. Some theoreticians give a quite different answer and refer to
the famous failed attempt of Hermann Weyl in 1917 to create a
genuinely scale-invariant theory of gravity and unify it with
electromagnetism at the same time. That theory, beautiful though it
was, never made it out of its cot. Einstein destroyed it before it was
even published with the simple remark that Weyl's theory would make
the spectral lines emitted by atoms depend on their prior histories,
in flagrant contradiction to observation. Polite in public, Einstein
privately called Weyl's theory 'geistreicher Unfug' [inspired
nonsense].

Ever since that time it seems to have been agreed that, for some
inscrutable reason, the quantum mechanics of atoms and elementary
particles puts an absolute scale into physics. Towards the end of his
life, still smarting from Einstein's rap, Weyl wrote ruefully "the
facts of atomism teach us that length is not relative but absolute"
and went on to bury his own cherished ambition with the words
"physics can never be reduced to geometry as Descartes had hoped".

I am not sure the Cartesian dream is dead even though the current
observational evidence for expansion from a Big Bang is rather
impressive. The argument from quantum mechanics, which leads to the
identification of the famous Planck length as an absolute unit, seems
to me inconclusive. It must be premature to attempt definitive
statements in the present absence of a theory of quantum gravity or
quantum cosmology. And the argument about the relativity of scale
being reflected in the changing ratio of the atomic dimensions to the
Hubble scale is vulnerable.

To argue this last point is the purpose of my contribution, which I
shall do by a much simpler example, for which, however, the principle
is just the same. Consider N point particles in Euclidean space. If N
is greater than three, the standard Newtonian description of this
system is based on 3N + 1 numbers. The 3N (=3xN) are used to locate
the particles in space, and the extra 1 is the time. For an isolated
dynamical system, such as we might reasonably conjecture the universe
to be, three of the numbers are actually superfluous. This is because
no meaning attaches to the three coordinates that specify the position
of the centre of mass. This is a consequence of the relativity
principle attributed to Galileo, although it was actually first
cleanly formulated by Christiaan Huygens (and then, of course,
brilliantly generalized by Einstein). The remaining 3N - 2 numbers
constitute an oddly heterogeneous lot. One is the time, three describe
orientation in space (but how can the complete universe have an
orientation?), one describes the overall scale, and the remaining 3N -
7 describe the intrinsic shape of the system. The only numbers that
are not suspect are the last: the shape variables.
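
The bookkeeping, spelled out (a restatement of the counts above,
nothing new):

    3N + 1 = 3 (centre of mass) + 1 (time) + 3 (orientation)
             + 1 (scale) + (3N - 7) (shape),

so for N = 3 there are 3N - 7 = 2 shape variables -- the two angles of
the instantaneous triangle taken up next.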

Developing further ideas first put forward in 1902 in his Science and
Hypothesis by the great French mathematician Poincaré, I have been
advocating for a
while a dynamics of pure shape. The idea is that the instantaneous
intrinsic shape of the universe and the sense in which it is changing
should be enough to specify a dynamical history of the universe. Let
me spell this out for the celebrated 3 body problem of Newtonian
celestial mechanics. In each instant, the instantaneous triangle that
they form has a shape that can be specified by two angles, i.e., just
two numbers. These numbers are coordinates on the space of possible
shapes of the system. By the 'sense' in which the shape is changing I
mean the direction of change of the shape in this two-dimensional
shape space. That needs only one number to specify it. So a dynamics
of pure shape, one that satisfies what I call the Poincaré criterion,
should need only three essential numbers to set up initial conditions.
That's the only ideal that, in Poincaré's words, would give the mind
satisfaction. It's the ideal that inspired Weyl (though he attacked
the problem rather differently).

Now how does Newtonian dynamics fare in the light of the Poincaré
criterion? Oddly enough, despite centuries of dynamical studies, this
question hardly seems to have been addressed by anyone. However,
during the last year, working with some N-body specialists, I have
established that Newtonian mechanics falls short of the ideal of a
dynamics of pure shape by no fewer than five numbers. Seen from the
rational perspective of shape, Newtonian dynamics is very complicated.
This is why the study of the Moon (which forms part of the archetypal
Earth-Moon-Sun three-body problem) gave Newton headaches. Among the
five trouble makers (which I won't list in full or discuss here), the
most obstreperous is the one that determines the scale or size. The
same five trouble makers are present for all systems of N point
particles for N equal to or greater than 3. Incidentally, the reason
why 3-body dynamics is so utterly different from 2-body dynamics is
that shape only enters the picture when N = 3. Most theoretical
physicists get their intuition for dynamics from the study of
Newtonian 2-body dynamics (the Kepler problem). It's a poor guide to
the real world.

The point of adding up the number of the variables that count in the
initial value problem is this. The Newtonian three-body problem can be
expressed perfectly well in terms of ratios. One can consider how the
ratios of the individual sides to the perimeter of the triangle change
during the evolution. This is analogous to following the evolution of
the ratio of the atomic-radii to the Hubble radius in cosmology. To
see if scale truly plays no role, one must go further. One must ask:
do the observable ratios change in the simplest way possible as
dictated by a dynamics of pure shape, or is the evolution more
complicated? That is the acid test. If that test is failed, absolute
scale is playing its pernicious role. The Poincaré criterion is an
infallible test of purity.

Both Newtonian dynamics and Einstein's general relativity fail it. The
fault is not in quantum mechanics but in the most basic structure of
both theories. Scale counts. In fact, seen from this dynamical
perspective Einstein's theory is truly odd. As James York, one of John
Wheeler's students in Princeton, showed 30 years ago (in a beautiful
piece of work that I regard as the highest point achieved to date in
dynamical studies), the most illuminating way to characterize
Einstein's theory is that it describes the mutual interaction of
infinitely many degrees of freedom representing the pure shape of the
universe with one single solitary extra variable that describes the
instantaneous size of the universe (i.e., its 3-dimensional volume in
the case of a closed universe). From Poincaré's perspective, this
extra variable, to put it frankly, stinks, but the whole of modern
cosmology hangs on it: it is used to explain the Hubble red shift.

There, I have stuck my neck out in good Popperian fashion. Current
observations suggest I will have my head chopped off and Einstein will
be vindicated. Certainly all the part of his theory to do with pure
shape is philosophically highly pleasing and is supported by wonderful
data. But even if true dynamical expansion is the correct explanation
of the Hubble red shift, why did nature do something so unaesthetic?
As I hope to show very shortly on the Los Alamos bulletin board,
dynamics of pure shape can mimic a true Hubble expansion. The fact is
that Einstein's theory allows red shifts of two kinds: one is due to
stretching (expansion) of space, while the other is the famous
gravitational red shift that makes clocks on the Earth run slower, by a
now observable amount, than clocks in satellites. It is possible to
eliminate scale from Einstein's theory, as Niall O'Murchadha and I
have shown. This kills the stretching red shift but leaves the other
intact. It is just possible that this could explain the Hubble red
shift.

Let me conclude this possibly premature (but I feel justified, since
all dogmas need to be challenged) contribution by pointing out that
according to the standard Big-Bang scenario two things have been
happening simultaneously since something lit the fuse: the universe
has been expanding from an extraordinarily uniform and isotropic
compressed state and it has simultaneously been getting more and more
clumpy. Inflationists claim to have explained why we observe such a
uniform Big Bang, but sceptics (who include me) have the
uncomfortable feeling that an observational cosmic coincidence is
merely being described, rather than explained, by theoretical fine
tuning of an adjustable parameter. In a self-respecting universe that
dismisses size (as opposed to shape) as a fiction, sharper predictions
must be possible. In a dynamics of pure shape, the only thing that can
happen is change of shape. That must explain the Hubble red shift.
Merely by observing the rate at which matter and the universe in
general becomes more clumpy, above all the rate of formation of
gravitationally collapsed objects, astronomers ought to be able to
predict the value of the Hubble constant.

So my challenge to the theoreticians is this: Are you absolutely sure
Einstein got it exactly right? Prove me wrong in my hunch that the
universe obeys a dynamics of pure shape subtly different from
Einstein's theory. If size does count, why should nature do something
so puzzling to the rational mind?

Julian Barbour is an independent theoretical physicist and author of
The End of Time.
________________________________________________________________

"When will we emerge from the quantum tunnel of obscurity?"
Can contradictory things happen at the same time? Does the universe
continue about its business when we're not looking at it? These
questions have been raised in the context of quantum mechanics ever
since the theory was formulated in the 1920s. While most physicists
dismissed these issues as "just philosophical", a small minority
(inspired by the examples of Louis de Broglie, Albert Einstein and
Erwin Schroedinger) continued to question the meaning of the most
successful theory of science, and often suffered marginalisation and
even ridicule.

It is one thing to apply quantum mechanics to calculate atomic energy
levels or the rate at which atoms emit light. But as soon as one asks
what is actually happening during an atomic transition, quantum
mechanics gives no clear answer. The Copenhagen interpretation, forged
by Niels Bohr and Werner Heisenberg, emphasises the subjective
experience of "observers" and avoids any description of an objective
reality; it talks about the chances of different outcomes occurring in
a measurement, but does not say what causes a particular outcome to
occur. For decades, students have been taught to avoid asking probing
questions. An attitude of "shut-up-and-calculate" has dominated the
field. The result is widespread confusion, and a strange unwillingness
to ask clear and direct questions. As the late cosmologist Dennis
Sciama once put it, whenever the subject of the interpretation of
quantum mechanics comes up "the standard of discussion drops to zero".

The publication of John Bell's book Speakable and Unspeakable in
Quantum Mechanics in 1987 provided a point of reference for a change
in attitude that gained real momentum in the 1990s.
Bell spearheaded a movement to purge physics of some inherently vague
notions inherited from the founding fathers of quantum mechanics. For
instance the "measurement apparatus" was treated by Bohr and
Heisenberg as something fundamentally distinct from the "system being
measured": the latter was subject to the laws of quantum mechanics
whereas the former was not. But if everything -- including our
equipment -- is made of atoms, how can such a distinction be anything
more than an approximation? In reality everything -- "system",
"apparatus", even human "observers" -- should obey the same laws of
physics. The clarity of Bell's writings forced many people to confront
the uncomfortable fact that quantum mechanics as usually formulated
had a problem explaining why we see definite events taking place.

Bell advertised what he saw as two promising avenues to resolve the
quantum paradoxes: the theory must be supplemented either with a new
random process that selects outcomes (the "dynamical reduction of the
state vector") or with extra "hidden variables" whose unknown values
select outcomes. Theories of both types have been constructed. Indeed,
a correct hidden-variables theory was written down by Louis de
Broglie as long ago as 1927, and was shown by David Bohm in 1952 to
account completely for quantum phenomena. The de Broglie-Bohm
theory gave an objective account of quantum physics; yet, until about
10 years ago, most physicists had not heard of it. Today, many have
heard of it, but still very few understand it or work on it. And it is
still not taught to students (even though in my experience many
students would love to know more about this theory).

One wonders where things will go from here. On the one hand, in the
last five years the subject of the interpretation of quantum mechanics
has suddenly become more respectable thanks to the rising technology
of quantum information and computation, which has shown that something
of practical use -- novel forms of communication and computation --
can emerge from thoughts about the meaning of quantum mechanics. But
on the other hand, there is a danger that the problem of the
interpretation of quantum mechanics will be pushed aside in the rush
to develop "real" technological applications of the peculiarities of
quantum phenomena.

The rise of quantum information theory has also generated a widespread
feeling that "information" is somehow the basic building block of the
universe. But information about what? About information itself? As
noted by P.W. Anderson in a recent Edge comment on Seth Lloyd, not
only does it seem unjustified to claim that "information" is the basic
stuff of the universe: worse, an unfortunate tendency has developed in
some quarters to regard the theory of information as the only really
fundamental area of research. Personally, I find quantum information
theory very interesting, and it has without doubt enriched our
understanding of the quantum world: but I fear that in the long run
its most enthusiastic practitioners may lead us back to the vague
subjectivist thinking from which we were only just emerging.

Antony Valentini is a theoretical physicist at Imperial College in
London.
________________________________________________________________

"How does being able to learn about a changing world endow our minds
with expectations, imagination, creativity, and the ability to
perceive illusions?"

When you open your eyes in the morning, you usually see what you
expect to see. Often it will be your bedroom, with things where you
left them before you went to sleep. What if you opened your eyes and
found yourself in a steaming tropical jungle? or a dark cold dungeon?
What a shock that would be! Why do we have expectations about what is
about to happen to us? Why do we get surprised when something
unexpected happens to us? More generally, why are we Intentional
Beings who are always projecting our expectations into the future? How
does having such expectations help us to fantasize and plan events
that have not yet occurred? How do they help us to pay attention to
events that are really important to us, and spare us from being
overwhelmed by the blooming buzzing confusion of daily life? Without
this ability, all creative thought would be impossible, and we could
not imagine different possible futures for ourselves, or our hopes and
fears for them. What is the difference between having a fantasy and
experiencing what is really there? What is the difference between
illusion and reality? What goes wrong when we lose control over our
fantasies and hallucinate objects and events that are not really
there? Given that vivid hallucinations are possible, especially in
mental disorders like schizophrenia, how can we ever be sure that an
experience is really happening and is not just a particularly vivid
hallucination? If there is a fundamental difference between reality,
fantasy, and illusion, then what is it?

Recent models of how the brain controls behavior have begun to clarify
how the mechanisms that enable us to learn quickly about a changing
world throughout life also embody properties of expectation,
intention, attention, illusion, fantasy, hallucination, and even
consciousness. I never thought that during my own life such models
would develop to the point that the dynamics of identified nerve cells
in known anatomies could be quantitatively simulated, along with the
behaviors that they control. During the last five years, ever-more
precise models of such brain processes have been discovered, including
detailed answers to why the cerebral cortex, which is the seat of all
our higher intelligence, is organized into layers of cells that
interact with each other in characteristic ways.

Although an enormous amount of work still remains to be done before
such insights are fully developed, tested, and accepted, the outlines
already seem clear of an emerging theory of biological intelligence,
and with it, the scaffold for a more humane form of artificial
intelligence. Getting a better understanding of how our minds learn
about a changing world, and of how to embody their best features in
more intelligent technologies, should ultimately have a transforming
effect on many aspects of human civilization.

Stephen Grossberg is a Professor of Cognitive and Neural Systems,
Mathematics, Psychology, and Engineering at Boston University.
________________________________________________________________

"How will computation and communication change our everyday lives,
again?"
The actual day to day things that we do have been changed drastically
for many people in the world over the last twenty years by the arrival
of personal computers. We spend hours each day in front of a screen,
typing. This was not the norm twenty years ago (although a few of us
did it even then), and no one had access to the vast stores of
information that are available to us on our laps now. We no longer ask
for reprints or go to the library, but instead download pdf versions
of papers that interest us. We no longer need to go to reference works
but instead retrieve them directly on our PCs. The number of people
that we correspond with has increased dramatically -- granted, the
medium has changed too. And chatting on the phone to people on the
other side of the world is no longer expensive or an event -- it is
just as common and cheap as calling someone a hundred miles away. Our
interaction with media is changing too -- it is becoming more and more
pull rather than push, even for TV and radio entertainment -- we
choose when and where we want to receive it, and how we will store it.
Surprisingly, neither the book, nor the movie, nor the documentary is
dead. There are more of them, in fact, although the method of delivery
is slowly changing. We have increased our number of options rather
than supplanted the old ones.

Moore's law and the increase of telecommunications infrastructure are
both continuing. What new options should we expect, and how will they
change the way we work? What will be the next "web", as unimagined by
most educated people today as our current one was in 1988? And what
will be the impact of the new methods of delivery we can expect to be
developed in the next 20 years?

Already tens of thousands of people have cochlear implants with direct
electronic to neural connections to restore their hearing. Multiple
groups are working on retinal implants, either into the eyeball, or
interfacing to V1 at the back of the head; again to replace lost
capabilities such as those resulting from macular degeneration. A few
quadriplegics have direct neural connections to computer interfaces so
that they can control a mouse and even type. As progress is made with
these silicon/neural interfaces, pushed along by clinical pressures to
cure those who are impaired, we can expect more and more "plastic
surgery" applications. A direct neural typing interface first perhaps,
and later data going the other way directly from the network into our
brains. There are considerable challenges to be met in understanding
neural "coding" to do this, but the clinical imperative is pushing
this work along.

How will we all be in the world then, say 20 years from now, when we
all have direct wireless connections to the Internet of that time with
information services as yet unimaginable? How will our grandchildren's
interaction with information change the way they work and think, in
the same way that instant messaging and vast numbers of web pages have
changed the way our children in elementary and high school operate
today?

Rodney Brooks is Director of the MIT Artificial Intelligence
Laboratory, and Fujitsu Professor of Computer Science. He is also
Chairman and Chief Technical Officer of IS Robotics.
________________________________________________________________

"Would an extra-terrestrial civilization develop the same mathematics
as ours? If not, how could theirs possibly be different?"
In writing my next book, about maths, I have been led to ponder this
question by the fact that there are philosophers, and a few
mathematicians, who believe that it is conceivable that there could be
intelligences with a fully developed mathematics that does not, for
example, recognize the integers or the primes, let alone Fermat's Last
Theorem or the Riemann Hypothesis. And yet, whole numbers seem to us
such a basic property of "things", that unless there were
intelligences that were not embodied in any way (and/or couldn't "see"
the discrete stars, for example) they would be bound to come across
number and all that follows. But then, I suppose you could imagine
intelligent beings which consisted, say, of density differences in a
gas but lacked boundaries separating one from another. In any case, if
such creatures do exist, it rather pours cold water on the use by SETI
of maths (e.g. prime x prime pictorial grids) to communicate with them.
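
To make the parenthetical concrete (my illustration, not Sabbagh's): a
message whose length is the product of two primes folds into only one
non-trivial rectangle, up to transposition, so a recipient who spots
the primes can recover the intended picture -- the principle behind
the 1974 Arecibo message's 1679 = 23 x 73 bits. A minimal sketch in
Python:

bits = ("0011100"
        "0100010"
        "0100010"
        "0100010"
        "0011100")   # 35 = 5 x 7 bits encoding a crude ring

assert len(bits) == 5 * 7

def show(bitstring, width):
    # Fold the one-dimensional message into rows of the given width.
    for i in range(0, len(bitstring), width):
        print(bitstring[i:i + width].replace("0", ".").replace("1", "#"))

show(bits, 7)    # the intended 5 x 7 picture emerges
# show(bits, 5)  # the wrong factorisation scrambles it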

Karl Sabbagh is a writer and television producer and author of A Rum
Affair: A True Story of Botanical Fraud.
________________________________________________________________

"Why do we fear the wrong things?"

A mountain of research shows that our fears modestly correlate with
reality. With images of September 11th lingering in their mind's eye,
many people dread flying to Florida for Spring break, but will instead
drive there with confidence -- though, mile for mile, driving during
the last half of the 1990s was 37 times more dangerous than flying.

Will yesterday's safety statistics predict the future? Even if not,
terrorists could have taken down 50 more planes with 60 passengers
each and -- if we'd kept flying -- we'd still have ended last
year safer on commercial flights than on the road. Flying may be
scary, but driving the same distance should be many times scarier.
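
The arithmetic behind that claim (the road figure is mine, not
Myers's): 50 planes x 60 passengers = 3,000 additional deaths, which
would still leave the year's commercial-aviation toll far below the
roughly 40,000 Americans killed in motor-vehicle crashes in 2001.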

Our perilous intuitions about risks lead us to spend in ways that
value some lives hundreds of times more than other lives. We'll now
spend tens of billions to calm our fears about flying, while
subsidizing tobacco, which claims more than 400,000 lives a year.

It's perfectly normal to fear purposeful violence from those who hate
us. But with our emotions now calming a bit, perhaps it's time to
check our fears against facts. To be prudent is to be mindful of the
realities of how humans suffer and die.

David G. Myers is a social psychologist at Hope College (Michigan) and
author of The Pursuit of Happiness.
________________________________________________________________

"Are the laws of nature a form of computer code that needs and uses
error correction?"
John D. Barrow is Research Professor of Mathematical Sciences,
University of Cambridge and author of Between Inner Space and Outer
Space.
________________________________________________________________

"Can we ever escape our past, or are we doomed to a future of
biobabble?"

In mid-November 1999, New Yorker writer Rebecca Mead published a
commentary on the candidacy of Al Gore, and in it she gave us a new
word. In the old days, candidates were advised in a pseudo-Freudian
frame. Clinton, in pre-Monica times, was told to emphasize his role as
"strong, assertive, and a good father." Now, however, this
psychobabble has been eclipsed by what she called biobabble and Mead
recommended that Gore's advice might best be based on evolutionary
psychology instead of Freud. In other words, it wasn't your parents
who screwed you up, it was the ancient environment. Mead cites Sarah
Hrdy, a primatologist, as suggesting that the ideal presidential
leader would be a grandma whose grandchildren were taken away and
scattered across the country in secret locations. Then the president
could be expected to act on the behalf of the general good, to
maximize her reproductive fitness. No wonder Gore wasn't appointed.

This is déjà vu all over again, and after the last century of
biopolicy in action, can we still afford to be here? Somehow we can't
get away from a fixation on the link between biology and behavior. A
causal relationship was long championed by the Mendelian Darwinians of
the Western World, as breeding and sterilization programs to get rid
of the genes for mental deficiencies became programs to get rid of the
genes for all sorts of undesirable social behaviors, and then programs
to get rid of the undesirable races with the imagined objectionable
social behaviors. Science finally stepped back from the abyss of human
tragedy that inevitably ensued, and one result was to break this link
by questioning whether human races are valid biological entities. By
now, generations of biological anthropologists have denied the biology
of race. Arguing that human races are socially constructed categories
and not biologically defined ones, biological anthropologists have
been teaching that if we must make categories for people, "ethnic
group" should replace "race" in describing them.

The public has been listening. This is how the U.S. census came to
combine categories that Americans base on skin color
("African-American," delineated by "one drop of blood") with categories
based on language ("Latino"). However, ethnic groups revitalize the
behavioral issue because ethnicity and behavior are indeed related,
although not by biology, but by culture. This relationship is
implicitly accepted as the grounds for the profiling we have heard so
much about of late, but here is the rub. Profiling has accomplished
more than just making it easier to predict behaviors, actually
revitalizing the issue of biology and behavior by bringing back "race"
as a substitute for "ethnic group." This might well have been an
unintended consequence of using "race" and "ethnic group"
interchangeably, because this usage forged a replacement link between
human biology and human culture. Yet however it happened, we are back
where we started, toying with the notion that human groups defined by
their biology differ in their behavior.

And so, how do we get out of this? Can we? Or does the programming
that comes shrink-wrapped with our state-of-the-art hardware continue
to return our thinking to this point because of some past adaptive
advantage it brought? It doesn't seem very advantageous right now.

Milford H. Wolpoff is Professor of Anthropology at the University of
Michigan and author (with Rachel Caspari) of Race and Human Evolution:
A Fatal Attraction.
________________________________________________________________

"How different could life have been?"
Physicists, including several in this group, are fond of asking, "What
if the universe had been different?" Are the fundamental constants
just numbers we accept as given, but which could have been different?
Or is there some deeper rationale, which we shall eventually discover,
that renders them unfree to change? Is our universe the way universes
have to be? Or is it one of a huge ensemble of universes? Given
present company, I would not aspire to this question, fascinating as
it is. Mine is its biological little brother. Is the life that we
observe the way life has to be? Or could we imagine other kinds of
life? Long the stock in trade of science fiction, this is a question I
want to move closer to science's domain. Unfortunately it is one for a
chemist - which I am not. My hope is that chemists will listen, and
work on it.
Life as we know it is far more uniform than superficially appears. The
differences between an elephant and an amoeba are superficial.
Biochemically speaking, we are all playing most of the same tricks. At
this level, most of the variation in life is to be found among the
bacteria. We large animals and plants have just specialised in a few
of the tricks that bacterial R & D developed in the Precambrian.
But all living things, bacteria included, practise the same
fundamental tricks. Using the universal DNA code, the one-dimensional
sequence of DNA codons specifies the one-dimensional sequence of amino
acids in proteins. This determines the proteins' three-dimensional
coiling, which specifies their enzymatic activity, and this, in turn,
specifies almost everything else. So, I'm not talking about whether
living things on other planets will look like us, or will have
television aerials sticking on their heads. It is easy to predict that
heavy planets with high gravitational fields will breed elephants the
size of flies (or flies built like elephants); light planets will grow
elephant-sized flies with spindly legs. It is easy to predict that,
where there is light, there will be eyes. This is not what I am
talking about. I want the answer to a more fundamental question.
My question, which is for chemists, is this. Can you devise a
fundamentally different, alternative biochemistry? Given that, as I
firmly believe, life all over the universe must have evolved by the
differential survival of something corresponding to genes -
self-replicating codes whose nature influences their own long-term
survival - do they have to be strung along polynucleotides? The
genetic code itself almost certainly didn't have to be the one we
actually have - plenty of other codes would have done the job. Ours is
a frozen accident which, once crystallised, could not change. But can
you think of a completely different kind of molecule, not a
polynucleotide at all, perhaps not even organic, which could do the
coding? Does it have to be digital like the DNA/RNA code, or could
some kind of analogue code be accurate and stable enough to mediate
evolution? Does it even have to be a one-dimensional code? And is
there any other class of molecules that could step into the shoes of
proteins?
Biochemists, please stop focusing exclusively on the way life actually
is. Think about how life might have been. Or how life could be on
other worlds. Channel your creativity to devising a complete,
alternative biochemistry, whose components are radically different
from the ones we know, but are at the same time mutually compatible -
participants in a wholly consistent system which your chemical
calculations show could actually work.
Why should we want this? I wanted to ask the question, "Is there life
on other worlds, and how similar is it to the life we know?" But there
is no immediate prospect of our receiving direct answers to these
questions, and I am pessimistic about our ever doing so. Life has
probably arisen more than once, but on islands in space too widely
scattered to make a meeting likely. Theoretical calculations may be
our best hope, and are certainly our most immediate hope, of at least
estimating the probabilities. There's also the point, which hardly
needs making on Edge, that to seek the unfamiliar is a good way to
illuminate oneself.
Reply to Paul Davies's response to John McCarthy
Paul Davies notes that some night-migrating birds navigate by the
stars, and asks whether avian DNA contains a map of the sky. "Could a
scientist in principle sequence the DNA and reconstruct the
constellations?" Alas, no.
Stephen Emlen, of Cornell University, researched the matter in 1975.
He placed Indigo Buntings in a circular cage in the centre of a
planetarium, and measured their fluttering against different sides of
the cage as an indicator of their preferred migratory direction. By
manipulating the star patterns in the planetarium, blotting out
patches of sky and so on, Emlen showed that the buntings did indeed
use Polaris as their North, and they recognized it by the surrounding
pattern of constellations.
So far so good. Now comes the interesting part. Is the pattern of
stars built into the birds' DNA, or is there some other, more general
way to define the north (or south) pole of the heavens? Put it like
that, and the point jumps out at you: the polar position in the sky
can be defined as the centre of rotation! It is the hub that stays
still, while the rest of the heavens turn. Did the birds use this as a
rule for learning?
Emlen reared young buntings in the planetarium, giving them experience
of different artificial `night skies'. Half of them, the controls,
experienced a night sky that rotated about Polaris, as usual. The
other half, the experimental birds, experienced a night sky in which
the centre of rotation was Betelgeuse. The control birds ended up
steering by Polaris, as usual. But the experimental birds, mirabile
dictu, came to treat Betelgeuse as though it was due north. Clever, or
what?
Richard Dawkins is an evolutionary biologist and the Charles Simonyi
Professor of the Public Understanding of Science at Oxford University. He is
the author of Unweaving the Rainbow.
________________________________________________________________

"How are moral assertions connected with the world of facts?"
Unlike many ancient philosophical problems, this one has,
paradoxically, been made both more urgent and less tractable by the
gradual triumph of scientific rationality. Indeed, the prevailing
modern attitude towards it is a sort of dogmatic despair: 'you can't
get an ought from an is, therefore morality must be outside the domain
of reason'. Having fallen for that non-sequitur, one has only two
options: either to embrace unreason, or to try living without ever
making a moral judgement. In either case, one becomes a menace to
oneself and everyone else.
On the tape of the bin Laden dinner party, a participant states his
belief that during the September 11 attack, Americans were afraid that
a coup d'état was under way. Worldwide, tens of millions of people
believe that the Israeli secret service carried out the attack. These
are factual misconceptions, yet they bear the imprint of moral
wrongness just as clearly as a fossil bears the imprint of life. This
illustrates an important strand in the fabric of reality: although
factual and moral assertions are logically independent (one cannot
deduce either from the other), factual and moral explanations are not.
There is an explanatory link between ought and is, and this provides
one of the ways in which reason can indeed address moral issues.
Jacob Bronowski pointed out that a commitment to discovering
scientific truth entails a commitment to certain values, such as
tolerance, integrity, and openness to ideas and to change. But there's
more to it than that. Not only scientific discovery, but scientific
understanding itself can depend on one's moral stance. Just look at
the difficulty that creationists have in understanding what the theory
of evolution says. Look at the prevalence of conspiracy theories among
the supporters of bad causes, and how such people are systematically
blind to rational argument about the facts of the matter. And,
conversely, look at Galileo, whose factual truth-seeking forced him to
question the Church's moral authority.
Why does this happen? We should not be surprised - at least, no more
surprised than we are that, say, scientific and mathematical
explanations are connected. The truth has structural unity as well as
logical consistency, and I guess that no true explanation is entirely
disconnected from any other. In particular, in order to understand the
moral landscape in terms of a given set of values, one needs to
understand some facts as being a certain way too, and vice versa.
Moreover, I think it is a general principle that morally right values
are connected in this way with true factual theories, and morally
wrong values with false theories.
What sort of principle is this? Though it refers to morality, at root
it is epistemological. It is about the structure of true explanations,
and about the circumstances under which knowledge can or cannot grow.
This, in turn, makes it ultimately a physical fact - but that is
another story.
David Deutsch, a physicist, is a member of the Centre for Quantum
Computation at the Clarendon Laboratory, Oxford University, and author
of The Fabric of Reality. 
________________________________________________________________

"Why is beauty making a comeback now?"
My hypothesis is that the modernist/post-modernist idea that beauty is
a social construct (with no deep bedrock in reality) is dead.

There are an increasing number of books coming out propounding the
notion that beauty is real and crosses all sorts of cultural and
historic lines. In their view, that which unites us as a species in
the perception of beauty is way larger than what divides us.

My big question is whether, in a disjointed world in which the search
for meaning is becoming ever more important, the existence of widely
agreed upon ideas of beauty will increasingly become a quick and
useful horseback way of determining whether or not *any* complex system,
human or technological, is coherent.

This idea draws in part from pre-industrial age definitions of beauty
that held that "Beauty is truth, truth beauty -- that is all ye know
on earth, and all ye need to know" (Keats, 1820), and most important,
"The most general definition of beauty....multeity in unity"
(Coleridge, 1814).

Interestingly enough, the idea that I view as increasingly dumb --
"Beauty is in the eye of the beholder" -- dates, according to
Bartlett's, only to 1878, which is about when the trouble started, in
my view.

Joel Garreau is the cultural revolution correspondent of The
Washington Post and author of Edge City.
________________________________________________________________

"Do wormholes exist?"
Two startling ideas about wholly different classes of objects emerged
from general relativity: black holes and wormholes. For over half a
century black holes have grown in importance, with many convincing
candidates in the sky and a vast range of theoretical support.
"Einstein bridges" as they were first called, emerged in the 1930s,
yet have not met with nearly the attention they deserve. We still
don't know if any were made in the early universe. That seems by far
the easiest way to find one -- inherit it from the Big Bang -- because to be
stable they demand exotic matter. Matt Visser's Lorentzian Wormholes
(1996) details the many types of wormholes allowed by theory. It's an
impressive range, mostly unexplored theoretically.

If they do exist, they could lead to interstellar travel--indeed, to
instantaneous access to points at the far range of the universe. They
would also confirm both general relativity and the discovery of exotic
matter. But curiously little thought seems given to detecting
wormholes, or theorizing about how small, stable ones might have
evolved since the early universe. Several co-authors and I proposed
using the Massive Compact Halo Object (MACHO) searches to reveal a
special class--"negative mass" wormholes--since they would appear as
sharp, two-peaked optical features, due to gravitational lensing
(Physical Review D 51, pp. 3117-20, 1995). So far all the two-peaked cases
found have been attributed to binary stars or companion planets,
though the data fits are not very close.

Surely there could be other ways to see such exotic objects. Some
thought and calculations about wormhole evolution might produce a
checkable prediction, as a sidelight to an existing search. Further
thought is needed about the implications that extra dimensions from
string theory will have on wormholes. It seems theoretically plausible
that the inflationary phase of the early universe might have made
negative mass string loops framing stable Visser-type wormholes.

Perhaps wormholes do not exist. A plausible search that yielded
nothing would still be a result, because we could learn something
about the possibility of exotic matter. A positive result, especially
detection of a wormhole we could reach with spacecraft, could change
human history.

Gregory Benford is a professor of physics and astronomy at the
University of California, Irvine. His most recent nonfiction is Deep
Time.
________________________________________________________________

"What is the pertinent question?"

Surely, the right question is not what was wrong before September 11th.
The question to be unravelled is why on earth the Western productive
system has become all-dominant in the general pool of genes, or memes.

The unsolved question is what makes that system so efficient, so
all-embracing, that no other system or ideology can compete in this
planet's race to improve economic well-being per capita. It must be
infuriating for believers in so-called alternative ways of dealing with
poverty and collective happiness -- in this life, I mean; I am not
talking about the afterlife or beyond.

We know a bit about the actual mechanisms of the system -- or rather,
what economists call aggregate demand. We also infer some of the
things which may influence the end product. But no attention is paid
to the type of intelligence which is at the roots of the system's
survival.

The answer might be that it is a self-organizing system based on swarm
intelligence. The nearest things to it are the construction methods and
organizational schemes of social insects like ants, bees and termites: a
few very simple rules instead of preprogramming and centralized
control; the right mixture of robustness and flexibility -- just like
DNA -- and hardly any supervising body at all.
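
As a toy illustration of how a few simple local rules, with no central
control, can produce organization -- not a model of economies, just the
classic "termites and wood chips" demonstration often used to introduce
swarm intelligence; every parameter below is arbitrary:

    import random

    # Each agent wanders at random, picks up a chip it bumps into while
    # empty-handed, and drops its chip onto a pile it bumps into while
    # carrying. No blueprint, no supervisor -- yet chips end up gathered
    # into fewer and fewer piles.
    SIZE, N_CHIPS, N_TERMITES, STEPS = 30, 150, 40, 200_000
    grid = [[0] * SIZE for _ in range(SIZE)]
    for _ in range(N_CHIPS):
        grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1

    termites = [[random.randrange(SIZE), random.randrange(SIZE), False]  # x, y, carrying?
                for _ in range(N_TERMITES)]

    for _ in range(STEPS):
        t = random.choice(termites)
        t[0] = (t[0] + random.choice([-1, 0, 1])) % SIZE
        t[1] = (t[1] + random.choice([-1, 0, 1])) % SIZE
        here = grid[t[0]][t[1]]
        if not t[2] and here > 0:      # rule 1: pick up a chip you bump into
            grid[t[0]][t[1]] -= 1
            t[2] = True
        elif t[2] and here > 0:        # rule 2: drop your chip onto a pile
            grid[t[0]][t[1]] += 1
            t[2] = False

    piles = sum(1 for row in grid for cell in row if cell > 0)
    print(f"chips now occupy {piles} cells (they started scattered over ~{N_CHIPS})")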

Termites of the genus Macrotermes have the added advantage of
responding, with due lags, to indirect stimulation from the
environment, and not only from other workers. Termites of this kind
would quickly halve the number of road accidents -- the opposite of
hominid practice -- by diverting traffic towards the railways, just by
looking at the death figures.

All this has to do with genetic knowledge. As to non-genetic factors,
two are of paramount importance: the separation of State from Religion
-- tantamount to a free entry ticket for everybody into the
decision-making process -- and the neat distinction between Theology
and Philosophy (which we now call science), which opened the door to
the technological revolution.

Eduardo Punset is Director and Producer of "Networks," a weekly science
programme on Spanish public television, and author of A Field Guide to
Survive in the XXIst Century.
________________________________________________________________

"How can a small number of genes build a complex mental machine?"

John McCarthy and I are from different generations (in the semester
before McCarthy invented Lisp, he taught my dad FORTRAN, using punch
cards on an old IBM) but our questions are nearly the same. McCarthy
asks "how are behaviors encoded in DNA"?

Until recently, we were not in a position to answer this question. Few
people would have even had the nerve to ask it. Many thought that most
of the brain's basic organization arose in response to the
environment. But we know that the mind of a newborn is far from a
blank slate. As soon as they are born, babies can imitate facial
gestures, connect what they hear with what they see, tell the
difference between Dutch and Japanese, and distinguish between a
picture of a scrambled face and a picture of a normal face. Nativists
like Steven Pinker and Stanislas Dehaene suggest that infants are born
with a language instinct and a "number sense". Since the function of
our minds comes from the structure of our brains, these findings
suggest that the microcircuitry of the brain is innate, largely wired
up before birth. The plan for that wiring must come in part from the
genes.

The DNA does not, however, provide a literal blueprint of a newborn's
mind. We have only around 35,000 genes, but tens of billions of
neurons. How does a relatively small set of genes combine to build a
complex brain? As Richard Dawkins has put it, the DNA is more like a
recipe than a blueprint. The genome doesn't provide a picture of a
finished product; instead, it provides a set of instructions for
assembling an embryo. Those instructions govern basic developmental
processes such as cell division and cell migration; it has long been
known that such processes are essential to building bodies, and it now
is becoming increasingly clear that the same processes shape our
brains and minds as well.

There is, however, no master chef. In place of a central executive,
the body relies on communication between cells, and communication
between genes. Although the power of any one gene working on its own
is small, the power of sets of genes working together is enormous. To
take one example, Swiss biologist Walter Gehring has shown that the
gene pax-6 controls eye development in a wide range of animals, from
fruit flies to mice. Pax-6 is like any other gene in that it gives
instructions for building one protein, but unlike the genes for
structural proteins such as keratin and collagen, the protein that
pax-6 builds serves as a signal to other genes, which in
turn build proteins that serve as signals to still other genes. Pax-6
is thus a "master control gene" that launches an enormous cascade, a
cascade of 2,500 genes working together to build an eye. Humans that
lack it lack irises, flies that lack it lack eyes altogether. The
cascade launched by pax-6 is so potent that when Gehring triggered it
artificially on a fruit fly's antenna, the fly grew an extra eye,
right there on its antenna. As scientists begin to work out the
cascades of genes that build the brain, we will finally come to
understand the role of the genes in shaping the mind.
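
The logic of such a cascade -- one trigger switching on genes that
switch on further genes, so that a small signal unfolds into a large,
organized outcome -- can be sketched in a few lines. This is only a toy
illustration; every gene name below other than pax-6 is invented, and
real regulatory networks involve feedback, timing, and thousands of
interacting products:

    # Toy regulatory cascade: activating one "master" gene propagates
    # activation through a small hypothetical network.
    cascade = {
        "pax-6": ["sig-A", "sig-B"],          # master control gene
        "sig-A": ["lens-1", "lens-2"],
        "sig-B": ["retina-1", "retina-2", "sig-C"],
        "sig-C": ["pigment-1"],
        # structural genes at the bottom regulate nothing further
        "lens-1": [], "lens-2": [], "retina-1": [], "retina-2": [], "pigment-1": [],
    }

    def activate(gene, network):
        """Return every gene switched on, directly or indirectly, by `gene`."""
        on, frontier = set(), [gene]
        while frontier:
            g = frontier.pop()
            if g not in on:
                on.add(g)
                frontier.extend(network.get(g, []))
        return on

    print(sorted(activate("pax-6", cascade)))   # one trigger, many downstream genes
    print(sorted(activate("sig-C", cascade)))   # a gene lower down switches on far less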

Response to Paul Davies' reply to John McCarthy

It is hard indeed to imagine that nature would endow an organism with
anything as detailed as The Cambridge Star Atlas. A typical bird
probably has fewer than 50,000 genes, but, as Carl Sagan famously
noted, there are billions and billions of stars.

Of course, you don't need to know all the stars to navigate. Every
well-trained sailor knows that Polaris marks North. A
northern-hemisphere-dwelling bird known as the Indigo Bunting knows
something even more subtle -- it doesn't just look for the brightest
star (which could be a lousy strategy on a cloudy night); instead it
looks for how the stars rotate.

Cornell ecologist Stephen Emlen proved this experimentally, by raising
buntings in a planetarium. One set of birds never got to see any
stars, a second set saw the normal pattern of stars, and a third group
saw a sneaky set of stars, in which everything rotated not around
Polaris, but around Betelgeuse. The poor birds who didn't see any
stars oriented themselves randomly (making it clear that they really
did depend on the stars rather than a built-in compass). The birds who
saw normal skies oriented themselves normally, and the ones who saw
skies that rotated around Betelgeuse oriented themselves precisely as
if they thought that Betelgeuse marked North. The birds weren't
relying on specific sets of stars, they were relying on the stars'
center of rotation.

You won't find the constellations in an indigo bunting's DNA, but you
would find in their DNA the instructions for building a biological
computer, one that can interpret the stars, taking the skies as its
input and producing an estimated direction as its output. Just how the
DNA can wire up such biological computers is my vote for the most
important scientific question of the 21st century.
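
A minimal sketch of the kind of computation such a "biological
computer" would have to perform: given two views of the same stars a
few hours apart, find the one point that does not move, and call that
direction north. The star coordinates below are invented for
illustration; only the geometry matters:

    import numpy as np

    def rotate(points, center, angle):
        """Rotate 2-D points about `center` by `angle` radians."""
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])
        return (points - center) @ R.T + center

    # Invented star positions (a flat patch of sky) and a hidden pole.
    rng = np.random.default_rng(0)
    stars_t0 = rng.uniform(-1, 1, size=(20, 2))
    true_pole = np.array([0.3, -0.2])
    stars_t1 = rotate(stars_t0, true_pole, angle=np.radians(30))  # a few hours later

    # Each star stays the same distance from the pole, so the pole lies on the
    # perpendicular bisector of every (before, after) pair. That is a linear
    # system: 2*(p1 - p0) . c = |p1|^2 - |p0|^2 for each star.
    A = 2 * (stars_t1 - stars_t0)
    b = (stars_t1 ** 2).sum(axis=1) - (stars_t0 ** 2).sum(axis=1)
    estimated_pole, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("true pole:     ", true_pole)
    print("estimated pole:", estimated_pole)  # "north" is the point that stays put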

Gary F. Marcus is a cognitive scientist at New York University and
author of The Algebraic Mind.
________________________________________________________________

"Why do we continue to act as if the universe were constructed from
nouns linked by verbs, when we know it is really constructed from
verbs linked by nouns?"
My question is to do with materialism, reductionism and the inertia of
intellectual progress. It is also connected with the limitations of
language as a mechanism for thought or, perhaps more accurately, of
thought as a mechanism that defines and constrains language. Above all
it is concerned with a 'process' view of the universe, which, although
frequently espoused by many of us in this group, still somehow manages
to remain trapped inside an older paradigm, like a butterfly that
can't quite break free from its chrysalis skin.
It seems to me that we intuitively, linguistically and historically
divide the world into tangible things, which we think of as real, and
intangible things, to which we usually (or latterly) accord less
respect. This is not really a valid distinction since, on closer
inspection, all supposedly solid, substantial things turn out to be
rather more ephemeral, distributed and transient than we might like
to think. The whole edifice of the universe, it seems, is constructed
from interactions between smaller, simpler phenomena that are
themselves only patterns of interactions between even simpler
phenomena. There are no 'atoms' in the Greek sense. Our division of
the world into objects, properties and structures is an artifice to
help us deal with it, not a true description of reality. The universe
is not divided into hardware and software: there is only software.
Life and Mind are perhaps the most obvious examples of things that
subsist as pure process, but atoms, electrons, buildings and societies
are in truth no different. To some extent we already know and
understand this, and yet I think we can't stop ourselves from dividing
hardware from software and treating the former as more real and
significant than the latter. Even when we attempt to regard life and
mind in a process way we often end up reifying them again as
'information' (as if information were a kind of substance) and end up
missing the point.
Perhaps the most incapacitating aspect of our implicit reification of
natural phenomena can be seen in a malignant form of reductionism.
Benign reductionism -- trying to understand something complex by first
identifying the properties of its parts -- is a valid and powerful
tool, often the only one available to science. On the other hand, it
often leads implicitly to a belief that something complex can be
understood solely in terms of the properties of its parts, without
reference to the relationships between those parts. It can easily be
demonstrated that this is nonsense (perhaps almost the converse of the
truth), and yet much of our present failure to understand nature rests
on such a fallacy.
I believe we are edging towards a new paradigm, in which process and
interaction -- the verbs -- are all there is, and material stuff
-- the nouns -- are simply placeholders for more verbs. However, we
don't yet have suitable language or mathematics for describing this
new viewpoint, and we never will if we fail to recognise the reasons
why we so easily slip back into our old ways. Before we can construct
something new we must deliberately deconstruct what we have. So the
first question I want to ask is: how is our understanding constrained
by the apparatus we use for gaining that understanding? After that we
can start to discuss what new kinds of language and mathematics might
liberate us from this paradigm trap.
Steve Grand is an artificial life researcher and creator of Lucy, a
robot baby orangutan. He is the author of Creation: Life and How to
Make It.
________________________________________________________________

"Is the universe a quantum computer?"
The universe is quantum mechanical, and its dynamics can be simulated
precisely and efficiently using quantum information processing. The
amount of quantum computation required to perform this simulation is
finite and has been calculated. Consequently, there is no obvious way
to distinguish the universe from a very large quantum logic circuit.
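
A toy numerical illustration of the underlying idea -- that continuous
quantum dynamics can be reproduced by a discrete sequence of
logic-gate-like steps. One spin, an arbitrary made-up Hamiltonian, and
a crude first-order Trotter decomposition; nothing here is specific to
Lloyd's calculation:

    import numpy as np

    # One spin, toy Hamiltonian H = X + 0.5*Z, evolved for time t.
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = X + 0.5 * Z
    t = 1.0

    # Exact continuous evolution exp(-iHt), via eigendecomposition of H.
    w, V = np.linalg.eigh(H)
    exact = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

    # The same evolution as a long product of small, fixed "gates":
    # exp(-i X dt) exp(-i 0.5 Z dt), repeated n_steps times.
    n_steps = 2000
    dt = t / n_steps
    gate_x = np.cos(dt) * I2 - 1j * np.sin(dt) * X                 # exp(-i X dt)
    gate_z = np.cos(0.5 * dt) * I2 - 1j * np.sin(0.5 * dt) * Z     # exp(-i 0.5 Z dt)
    circuit = np.linalg.matrix_power(gate_x @ gate_z, n_steps)

    psi0 = np.array([1, 0], dtype=complex)                         # start in |0>
    print("exact  :", exact @ psi0)
    print("circuit:", circuit @ psi0)
    print("max difference:", np.abs((exact - circuit) @ psi0).max())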

Seth Lloyd is an Associate Professor of Mechanical Engineering at MIT
and a principal investigator at the Research Laboratory of
Electronics.
________________________________________________________________

"Can wealth be distributed?"
Even with productivity showing startling increases as a consequence of
new information technologies, everything suggests that the gap between
rich and poor is growing dramatically worldwide and even beginning to
increase again in the U.S. So much for trickle-down economics.

John Markoff covers the computer industry and technology for The New
York Times and is co-author of Takedown: The Pursuit and Capture of
America's Most Wanted Computer Outlaw (with Tsutomu Shimomura).
________________________________________________________________

"Is God nothing more than a sufficiently advanced extra-terrestrial
intelligence?"
This question is based on what I call, tongue in cheek, "Shermer's
Last Law," that any sufficiently advanced extra-terrestrial
intelligence is indistinguishable from God.
As scientist extraordinaire (most profoundly as inventor of the
communications satellite) and author of an empire of science fiction
books and films (most notably 2001: A Space Odyssey), Arthur C. Clarke
is one of the most far-seeing visionaries of our time. Thus, his pithy
quotations tug harder on our collective psyches for their inferred
insights into humanity and our place in the cosmos. And none do so
more than his famous three laws:
Clarke's First Law: "When a distinguished but elderly scientist states
that something is possible he is almost certainly right. When he
states that something is impossible, he is very probably wrong."
Clarke's Second Law: "The only way of discovering the limits of the
possible is to venture a little way past them into the impossible."
Clarke's Third Law: "Any sufficiently advanced technology is
indistinguishable from magic."
This last observation stimulated me to think more on the relationship
of science and religion, particularly the impact the discovery of an
Extra-Terrestrial Intelligence (ETI) would have on both traditions. To
that end I would like to immodestly propose Shermer's Last Law (I
don't believe in naming laws after oneself, so as the good book warns,
the last shall be first and the first shall be last): "Any
sufficiently advanced ETI is indistinguishable from God".
God is typically described by Western religions as omniscient and
omnipotent. Since we are far from the mark on these traits, how could
we possibly distinguish a God who has them absolutely, from an ETI who
has them in relatively (to us) copious amounts? Thus, we would be
unable to distinguish between absolute and relative omniscience and
omnipotence. But if God were only relatively more knowing and powerful
than us, then by definition it "would" be an ETI! Consider two
observations and one deduction:
1. Biological evolution operates at a snail's pace compared to
technological evolution (the former is Darwinian and requires
generations of differential reproductive success, the latter is
Lamarckian and can be implemented within a single generation). 2. The
cosmos is very big and space is very empty ("Voyager I", our most
distant spacecraft hurtling along at over 38,000 mph, will not reach
the distance of even our sun's nearest neighbor, the Alpha Centauri
system that it is "not" even headed toward, for over 75,000 years).
Ergo, the probability of an ETI who is only slightly more advanced
than us and also makes contact is virtually nil. If we ever do find
ETI it will be as if a million-year-old "Homo erectus" were dropped
into the middle of Manhattan, given a computer and cell phone and
instructed to communicate with us. ETI would be to us as we would be
to this early hominid -- godlike.
Science and technology have changed our world more in the past century
than it changed in the previous hundred centuries. It took 10,000
years to get from the cart to the airplane, but only 66 years to get
from powered flight to a lunar landing. Moore's Law of computer power
doubling every eighteen months continues unabated and is now down to
about a year. Ray Kurzweil, in The Age of Spiritual Machines,
calculates that there have been thirty-two doublings since World War
II, and that the Singularity point may be upon us as early as 2030.
The Singularity (as in the center of a black hole where matter is so
dense that its gravity is infinite) is the point at which total
computational power will rise to levels that are so far beyond
anything that we can imagine that they will appear near infinite and
thus, relatively speaking, be indistinguishable from omniscience (note
the suffix!).
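
The arithmetic of those doublings, laid out explicitly -- the specific
counts are the essay's claims as quoted above, not independent
estimates:

    # Growth from repeated doublings, as described above.
    doublings_since_wwii = 32
    print(f"2**{doublings_since_wwii} = {2**doublings_since_wwii:,}")  # ~4.3 billion-fold

    # If computing power now doubles roughly every year (the essay's claim),
    # the decades to 2030 add a comparable run of doublings again:
    years_to_2030 = 2030 - 2002
    print(f"another {years_to_2030} annual doublings is a further factor of {2**years_to_2030:,}")
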
When this happens the world will change more in a decade than it did
in the previous thousand decades. Extrapolate that out a hundred
thousand years, or a million years (an eye blink on an evolutionary
time scale and thus a realistic estimate of how far advanced ETI will
be, unless we happen to be the first space-faring species, which is
unlikely), and we get a gut-wrenching, mind-warping feel for just how
godlike these creatures would seem.
In Clarke's 1953 novel Childhood's End, humanity reaches something
like a Singularity (with help from ETIs) and must make the transition
to a higher state of consciousness in order to grow out of childhood.
One character early in the novel opines that "Science can destroy
religion by ignoring it as well as by disproving its tenets. No one
ever demonstrated, so far as I am aware, the nonexistence of Zeus or
Thor, but they have few followers now."
Although science has not even remotely destroyed religion, Shermer's
Last Law predicts that the relationship between the two will be
profoundly affected by contact with ETI. To find out how, we must
follow Clarke's Second Law, venturing courageously past the limits of
the possible and into the unknown. Ad astra!
Michael Shermer is the founding Publisher of Skeptic magazine and the
author of The Borderlands of Science. 
________________________________________________________________

"Is there Progress?"

I work on the question of evolution, not as it exists in Nature, but
as a formal system which enables open-ended learning. Can we
understand the process in enough detail to simulate the progress of
biological complexity in pure software or electronics? A phenomenon has
appeared in many of my laboratory's experiments in learning across
many different domains like game playing and robots. We have dubbed it
a "Mediocre Stable State." It is an unexpected systematic equilibrium,
where a collection of sub-optimal agents act together to prevent
further progress. In dynamical systems, the MSS hides within cycles of
forgetting that which has already been learned.

When an MSS arises, instead of achieving creativity driven by
merit-based competition, progress is subverted through unspoken collusion.
This occurs even in systems where agents cannot "think" but are
selected by the invisible hand of a market. We know what collusion is:
the two gas stations on opposite street corners fix their prices to
divide the market. Hawks on both sides of a conflict work together to
undermine progress towards peace. The union intimidates the
pace-setter, lest he raise the work standards for everyone else. The
telephone company undercapitalizes its own lucrative deployment of
broadband, which might replace toll collection. Etc.
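
The gas-station example can be made concrete with a tiny
repeated-pricing game -- a sketch with invented payoffs, not a model of
Pollack's experiments: each station earns more per customer at the HIGH
price, undercutting steals the whole market for one round, and a simple
"punish forever after a defection" rule makes the mediocre, colluding
equilibrium self-sustaining:

    # Toy repeated pricing game. 100 customers split the market at equal
    # prices; the cheaper station takes them all. Margins are invented:
    # HIGH = 2.0 per customer, LOW = 1.5 per customer.
    def round_profit(p1, p2):
        margin = {"HIGH": 2.0, "LOW": 1.5}
        if p1 == p2:
            return 50 * margin[p1], 50 * margin[p2]
        return (100 * margin[p1], 0.0) if p1 == "LOW" else (0.0, 100 * margin[p2])

    def play(strategy_a, strategy_b, rounds=10):
        history, total_a, total_b = [], 0.0, 0.0
        for _ in range(rounds):
            a, b = strategy_a(history, 0), strategy_b(history, 1)
            pa, pb = round_profit(a, b)
            total_a, total_b = total_a + pa, total_b + pb
            history.append((a, b))
        return total_a, total_b

    def grim(history, me):
        # Collude at HIGH until the other side ever undercuts; then punish forever.
        other = 1 - me
        return "LOW" if any(h[other] == "LOW" for h in history) else "HIGH"

    def undercut_once(history, me):
        return "LOW" if not history else grim(history, me)

    print("both collude:      ", play(grim, grim))           # (1000, 1000)
    print("one tries to cheat:", play(undercut_once, grim))  # price war; both end up worse off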

As a scientist with many interests in High Technology, of course I
know there is progress. I am witness to new discoveries, new
technologies, and the march of Moore's law. Clearly, the airplane,
long distance communication, and the computer are revolutionarily
progressive in amplifying human commerce, communication and even
conflict. But these scientific and technological advances stand in
stark contrast to the utterly depressing lack of progress in human
affairs.

Despite the generation of material wealth, health breakthroughs, and
birth control methods which could end want and war, human social
affairs are organized almost exactly the way they were 500 years ago.
Human colonies seem -- like ant colonies and dog packs -- fixed by our
genetic heritage, despite individual cognitive abilities. In fact, it
is difficult to distinguish anymore between Dictatorships,
Authoritarian Regimes, Monarchies, Theocracies, and Kleptocracies, or
even one-party (or two party oscillatory) democracies. When labels are
removed, it looks as if authority and power are still distributed in
hierarchical oligarchies, arranged regionally. Stability of the
oligarchic network is maintained by complex feedback loops involving
wealth, loyalty, patronage, and control of the news.

Of course, I'm not against stability itself! But when patronage and
loyalty (the collusion of the political system) are rewarded more than
competitive merit and excellence, progress is subverted.

The 90's really felt like progress to me, especially with visible
movement towards peace in certain regions of the world and an
unparalleled creative burst in our industry. But now it's like we've
just been memory bombed back to the 1950's. The government is printing
money and giving it to favored industries. We are fighting an
invisible dehumanized enemy. War is reported as good for the economy.
Loyalty to the fatherland must be demonstrated. One Phone Company to
rule us all. An expensive arms race in space. And law breaking secret
agents are the coolest characters on TV.

Haven't we been here before? Haven't we learned anything?

Jordan B. Pollack is a computer science and complex systems professor
at Brandeis University who works on AI, Artificial Life, Neural
Networks, Evolution, Dynamical Systems, Games, Robotics, Machine
Learning, and Educational Technology.
________________________________________________________________

"Can there be a science of human potential and the good life?"
Despite monumental advances in brain and behavioral sciences, nothing
like a science of human potential and the good life has yet emerged.
This seems ironic in an age of unprecedented wealth, yet one that also
has chronically high levels of stress and life dissatisfaction.

My hunch is that there's not yet a science of human potential and the
good life because such concerns are only just now moving from the
realm of humanistic thinking to one informed by science. Much
of my research lies at the interface between humanities and brain
science, as my collaborators and I address basic issues regarding how
enduring questions about the quality of human life can be informed by
brain science.

In my primary research, I ask, what is the neural basis of human
intelligence, and how can our understanding of brain development and
plasticity be used to construct more effective learning environments?
With Gabrielle Starr, an English professor at NYU, and Anne Hamker here
at Caltech, we are asking, what is the brain basis of aesthetic
experience, and how can such an understanding be used to deepen our
emotional life? With Michael Dobry, co-director of the graduate
industrial design program at the Art Center College of Design, we are
asking, what is the relation between design and the brain, and how can
the design of daily life be more in line with the brain's capacities?

Ultimately, a science of human potential and of the good life must
help explain how these human capacities can be actualized in contexts
that confer significance and dignity to individual life.

Steven R. Quartz is Director of the Developmental Cognitive
Neuroscience Laboratory at the California Institute of Technology.
________________________________________________________________

"Why is religion so important to most Americans and so trivial to most
intellectuals?"
Is it just a matter of IQ? (Though I thought intellectuals no longer
believed in IQ...) But empirically it can't be an IQ issue, because so
many of history's greatest minds based their lives on religion -- from
Michelangelo or Bach to Spinoza or Dante or Kant. Do modern
intellectuals actually believe that all such people are naively
deluded? Or could they be missing something themselves?

David Gelernter is a professor of computer science at Yale, chief
scientist at Mirror Worlds Technologies and author of Drawing Life:
Surviving the Unabomber. 
________________________________________________________________

"What, me worry?"
This question, which has been asked by many, is now usually attributed
to Alfred E. Neuman, the poster boy of Mad Magazine. His face tells it
all -- a composite of attractive merriment and troublesome
mindlessness. Who doesn't want to feel like smiling all the time? But
at what price?

Psychiatrists know that some people have pathological forms of worry.
There are names for this such as obsessive-compulsive disorder and
generalized anxiety disorder; and treatments, such as psychotherapy
and Prozac. But what about the rest of us? What is the optimal balance
between worry and contentment? Should we all be offered some kind of
training to help us achieve this optimal balance? And how should we
apply our growing understanding of the brain mechanisms that control
these feelings?

Samuel Barondes is a professor and director of the Center for
Neurobiology and Psychiatry at UC San Francisco and author of Mood
Genes: Hunting for Origins of Mania and Depression.
________________________________________________________________

"What is the missing ingredient -- not genes, not upbringing -- that
shapes the mind?"

We know that genes play an important role in the shaping of our
personality and intellects. Identical twins separated at birth (who
share all their genes but not their environments) and tested as adults
are strikingly similar -- though far from identical -- in their intellects
and personalities. Identical twins reared together (who share all
their genes and most of their environments) are much more similar than
fraternal twins reared together (who share half their genes and most
of their environments). Biological siblings (who share half their
genes and most of their environments) are much more similar than
adopted siblings (who share none of their genes and most of their
environments).

Many people are so locked into the theory that the mind is a Blank
Slate that when they hear these findings they say, "So you're saying
it's all in the genes!" If genes have any effect at all, it must be
total. But the data show that genes account for only about half
of the variance in personality and intelligence (25% to 75%, depending
on how things are measured). That leaves around half the variance to
be explained by something that is not genetic.

The next reaction is, "That means the other half of the variation must
come from how we were brought up by our parents." Wrong again.
Consider these findings. Identical twins separated at birth are not
only similar; they are "no less" similar than identical twins reared
together. The same is true of non-twin siblings -- they are no more
similar when reared together than when reared apart. Identical twins
reared together -- who share all their genes and most of their family
environments -- are only about 50% similar, not 100%. And adopted
siblings are no more similar than two people plucked off the street at
random. All this means that growing up in the same home -- with the
same parents, books, TVs, guns, and so on -- does not make children
similar.

So the variation in personality and intelligence breaks down roughly
as follows: genes 50%, families 0%, something else 50%. As with Bob
Dylan's Mister Jones, something is happening here but we don't know
what it is.
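
The way those percentages fall out of twin correlations can be sketched
with the standard (and admittedly crude) decomposition attributed to
Falconer; the correlations below are round numbers in the spirit of the
figures quoted above, not data from any particular study:

    # Crude variance decomposition from twin correlations (Falconer's formulas).
    # Identical twins share ~100% of their genes, fraternal twins ~50%;
    # both kinds reared together share the family environment.
    r_identical = 0.50   # "only about 50% similar", as above (round, illustrative)
    r_fraternal = 0.25   # round, illustrative

    heritability       = 2 * (r_identical - r_fraternal)        # genes
    shared_environment = 2 * r_fraternal - r_identical          # families
    something_else     = 1 - heritability - shared_environment  # the mystery half

    print(f"genes          ~ {heritability:.0%}")
    print(f"families       ~ {shared_environment:.0%}")
    print(f"something else ~ {something_else:.0%}")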

Perhaps it is chance. While in the womb, the growth cone of an axon
zigged rather than zagged, and the brain gels into a slightly
different configuration. If so, it would have many implications that
have not figured into our scientific or everyday way of thinking. One
can imagine a developmental process in which millions of small chance
events cancel one another out, leaving no difference in the end
product. One can imagine a different process in which a chance event
could derail development entirely, making a freak or monster. Neither
of these happens. The development of organisms must use complex
feedback loops rather than blueprints. Random events can divert the
trajectory of growth, but the trajectories are confined within an
envelope of functioning designs for the species defined by natural
selection.

Also, what we are accustomed to thinking of as "the environment" --
namely the proportion of variance that is not genetic -- may have
nothing to do with the environment. If the nongenetic variance is a
product of chance events in brain assembly, yet another chunk of our
personalities and intellects would be "biologically determined"
(though not genetic) and beyond the scope of the best laid plans of
parents and society.

Steven Pinker, research psychologist, is professor in the Department
of Brain and Cognitive Sciences at MIT and author of Words and Rules.
________________________________________________________________

"When will our souls be upgraded?"
If, as Harold Bloom puts it, Shakespeare invented the modern soul, if
we are the way we are because Shakespeare existed as a writer, the
question arises whether this historic progression has come to an end
and will soon be replaced by a new version of 21st century souls.

The Shakespearean soul will not be able to cope with the innovations
and insights of the near future. Star Wars, Star Trek, even Gibson
might prove unrealistic -- not because of their description of
hardware, but because of their description of the soul.

Frank Schirrmacher is Publisher, Frankfurter Allgemeine Zeitung and
author of Die Darwin AG. 
________________________________________________________________

"Is it conceivable that the standard curriculum in science and math,
crafted in 1893, will still be maintained in the 26,000 high schools
of this great nation?"

The world is caught up in a paroxysm of change. Key words: globalism,
multinational corporations, ethical influences in business, explosive
growth of science-based technology, fundamentalism, religion and
science, junk science, alternative medicines, rich vs. poor gap, who
supports research, where is it done, how is it used, advances in
cognition science, global warming, the disconnect between high school
and college....these and other influences are undergoing drastic
changes and all will have some impact on science, mathematics and
technology and therefore on how our schools must change to produce
graduates who can function in the 21st century...function and assume
positions of leadership. Is it conceivable that the standard
curriculum in science and math, crafted in 1893, will still be
maintained in the 26,000 high schools of this great nation?

This is a question that obsesses me in my daily activities. I have
been agonizing over it along with a few colleagues around Fermilab,
University of California, and the students, staff and trustees of the
Illinois Math Science Academy (IMSI), a three year public residential
high school for gifted students, I was involved in founding some 16
years ago.

Is not our nation even more at risk now than ever? Are not our 2
million teachers even more poorly trained now, even less respected,
hardly better compensated than when we were A Nation at Risk? Some 13
years ago, the collected Governors of the United States under the
leadership of the President made six promises, all starting with: "By
the year 2000 all students will....".

The rhetoric varies from high comedy to dark tragedy. Today, the Glenn
National Commission summarizes its dismal study of science and math
education in a succinct title: Before It's Too Late. Alan Greenspan
mesmerizes a congressional panel on Education and the Work Force with
the warning that if we do not radically improve our educational
system, there is a danger to the future of the nation. Words carefully
chosen. Rhetoric. We have no national strategy to address this
question. In a war on ignorance and on looming changes of unknowable
dimensions, shouldn't we have a strategy?

Leon M. Lederman, the director emeritus of Fermi National Accelerator
Laboratory, has received the Wolf Prize in Physics (1982), and the
Nobel Prize in Physics (1988). He is the author (with Dick Teresi) of
The God Particle: If the Universe Is the Answer, What Is the Question?
________________________________________________________________

"In view of globalization, which is here to stay, and the events of
September 11 and its aftermath, which were a shock to most of us, do we
need to make fundamental changes in our educational goals and
methods?" 
Precollegiate education has been remarkably consistent over the
decades: literacy in the primary years, initial mastery of a few major
subject areas (math, science, history, language, perhaps in the arts)
in middle and secondary school. We could take the position that we
know how to do this and should just stick to our guns. I don't agree.

Because of globalization, the capacities to think across disciplines, to
synthesize wide ranges of information efficiently and accurately, to
deal with individuals and institutions with which one has no personal
familiarity, and to adjust to the continuing biological and technological
revolutions are at a far greater premium. And because of the events
of September 11, we need to think much more deeply about the nature of
democratic institutions and the threats to them, the role and limits
of tolerance and civil liberties, the fate of scarce resources,
profound gaps across religions and cultures, just to name a few.

The time has come when we need to rethink what we teach, how we
teach, what young people learn on their own, how they interact, how
they relate to mass culture, etc. The question we must then ask is: Do
we have to continue to be reactive or can we plan proactively the
education that is needed for our progeny in this new world?
Howard Gardner is Professor of Cognition and Education at Harvard
University and the co-author (with Mihaly Csikszentmihalyi and William
Damon) of Good Work: When Excellence and Ethics Meet.
________________________________________________________________

"When is it time to stop calculating risk and rewards, and just go
ahead and do what you know is right?"

In the world we live in, mathematicians and investors have become ever
better at calculating risks, assessing outcomes, laying out possible
scenarios. But real economic progress comes from taking challenges,
not risks, and building something fantastic *despite* the odds,
because you know you're smarter and more dedicated and more
persistent, and you can gather and lead a better team, than any
rational calculation would indicate. That's how new businesses get
built, new markets get opened, new value gets created.

And real political, social and ethical progress, likewise, comes not
just from negotiating a carefully calibrated "win-win"
balance-of-power compromise, matching move for move, but from taking
the lead, challenging the other guy to follow, showing the way
forward. We make progress by stretching the imagination and doing
things we won't regret. When you cannot predict consequences, then you
need to consider your conscience and do what's right.

We need not calculation, but courage!

Esther Dyson is president of EDventure Holdings and editor of the
computer-industry newsletter, Release 1.0, and author of the book,
Release 2.1: A Design for Living in the Digital Age.
________________________________________________________________

"The hows and whys of what led to us"

There are, it seems to me, just two fundamental scientific questions
that, for very different reasons, we may have no possibility of
answering with any certainty.

One question is so fundamental that it is arguably not a scientific
question at all: It's the big how and why question of existence
itself. Although there are many technical questions still to be
answered, as a mathematician, I find myself broadly content with
science's explanation of how the physical universe -- including time
itself -- sprang into being: the symmetry breaking, primordial
fireball we call the Big Bang, followed by the subsequent evolution
into the universe we see today. But that is simply an explanation of
the mechanics of the universe of our experience and perception. It
leaves us with a lingering question of how, and perhaps why, the
framework arose in which the Big Bang took place in the first place --
be that framework one in which our universe is the only one there is
and has ever been, or one that cycles in "universe time" (whatever
that is), or maybe some kind of multiple universe scenario.

I accept that this is not really a scientific question. Science only
addresses the how of our own universe, starting just after the Big
Bang. But my curiosity, both as a scientist and more generally just as
a thinking person, cannot help but dwell from time to time on the
biggest question of all -- the question that for those having a deep
religious faith seems to find an answer in the phrase "God made it
that way." (An answer that I find even more incomprehensible in a
world where millions of human beings believe that that same God
authorizes his chosen emissaries to fly jet airliners full of humans
into buildings full of other humans.)

My second fundamental question is clearly a genuine scientific matter.
In fact, it is a technical question about evolution by natural
selection. Exactly how and why did a species (namely, us) develop that
has the capacity to think abstractly, that possesses language, and
that can reflect on its own existence? Like the big existence issue,
this is a question that has enormous significance for us, as humans.
And that makes it the more frustrating that we may find ourselves
unable ever to answer it with any certainty.

In my recent book The Math Gene, I summarized arguments to show that
the possession of language (i.e., a symbolic communication system with
a recursive grammatical structure allowing for the production and
comprehension of meaningful utterances of unlimited length) and the
ability for "offline" thinking (reasoning about the world in the
absence of direct input from the environment and without the automatic
generation of a physical response) are two sides of the same coin.
Implicit in that argument is that this ability also brings with it the
capacity for self-reflective, conscious thought. (I also argued that
such a mental capacity also yields the potential for mathematical
thought.) Thus, we are talking here about the capacity that makes us
human, and in so doing makes us very different from any other species
on Earth.
The best evidence we have from anthropology is that our ancestors
acquired this capacity some time between 75,000 and 200,000 years ago.
(The evidence is in the form of manufactured artifacts that early
humans left behind, which indicate such a level of abstract thinking
and communication.) But how -- and in terms of natural selection, why
-- did our ancestors acquire this capacity? All we know for sure is
that it came at the end of a three-and-a-half-million year period in
which the average brain size of our ancestors grew to roughly its
present level, approximately nine times larger than is normal for a
mammal of our body size and about twice that of a present-day ape.

What makes this question particularly hard is that, at least in terms
of functionality (as opposed to brain structure), the acquisition of
syntactic structure (i.e., the structure that enables us to create
complex sentences or to reason abstractly about the world) is an
all-or-nothing event. As linguists have pointed out, you cannot have
"half a grammar". True, in theory you can have grammars without, say,
passive constructions, but there is no chain of gradually more complex
grammars that starts with protolanguage -- simple subject-predication
utterances -- and leads continuously to the grammatical structure that
is common to all human languages. The chain has to start with a sudden
jump. Although the acquisition of language was a major functional
change in brain capacity, there is no reason why that jump was not the
result of a tiny structural change in the brain. But what propelled
the brain to reach a stage where such a change could occur? And what
exactly was that small structural change? This would surely be a minor
technical question about one detail, among thousands, of evolutionary
history, were it not for the fact that it was this single change that
made us human -- that made it possible for us to ask these how and why
questions, and to care about the answers.

One oft-repeated suggestion for the natural selection advantage that
language provides is that it enabled the communication of more complex
thoughts and ideas than was previously possible. But that suggestion
falls down immediately when you realize that such communication can
only arise when the brain that is doing the communicating is able to
form those complex thoughts and ideas in the first place, and that
capacity itself requires a brain having grammatical structure.

It seems likely that the two sides of this particular coin, thinking
complex thoughts and communicating them, arose at the same time, and
indeed it could have taken both aspects together to spur the
development that led to their acquisition. But we are still left with
the tantalizing question that the obvious natural selection advantages
this capacity provides only came into play after the capacity was in
place. Just what led to and prompted that jump remains a mystery.

There has been, as you might imagine, no shortage of attempts to
provide an explanation, but so far I haven't seen one that I find
convincing, or even close to convincing. (I mention some in The Math
Gene, and give pointers to further reading on the matter.) And even if
someone produces a compelling explanation, it seems we will never know
for sure. When our early ancestors died, their brains rapidly rotted
away, leaving nothing but the skulls that contained them. And even if,
by some fluke, we found an intact brain from some early ancestor,
buried deep in the ice of a glacier somewhere, how could that help us?
Dissecting an object as complex as the human brain tells us virtually
nothing about what that brain did -- how it thought and what it
thought about.

Our higher brain functions could just have been an accident. Of
course, all evolutionary changes are accidents. What I mean here is
that it may be purely accidental that the structural change in the
brain that gave us language and abstract, symbolic thought did in fact
have that effect. It might just be, as some have suggested, that the
brain grew in complexity as a device for cooling the blood, and that
language and symbolic thought are mere accidental by-products of the
body's need to maintain a certain temperature range. (Certainly, the
brain is an extremely efficient cooling device, as illustrated by the
fact that putting on a hat is a remarkably effective way of staying
warm when we go skiing.) Personally, I don't buy the cooling mechanism
explanation. But unless and until someone comes up with something more
convincing, I see no way we can rule it out.

For all our huge success in telling the story of how life began and
evolved to its present myriad of forms, it seems likely that we may
never know for certain exactly what it was that gave us the one thing
we value above all else, and the thing that makes us human: our minds.
If there is one question I would like to answer above all others, it
is this one.

Keith Devlin, mathematician, is a Senior Researcher at Stanford
University and author of The Math Gene.
________________________________________________________________

"How different could minds be?"

Plato believed that human knowledge was inborn. Kant and Peirce agreed
that much of knowledge had to exist prior to birth or it would be
impossible to understand or learn anything. Until quite recently,
psychologists were almost uniformly opposed to this notion, insisting
that only process, not content, could be part of our native equipment.
Piaget was typical (and highly influential) in asserting that only
learning skills and inferential procedures such as deductive rules and
schemes for induction and causal analysis were native. He also
maintained that these were identical for all people with undamaged
minds, and that development of such processes ended with adolescence.
Content could be almost infinitely variable because these processes
operate on different inputs for different people in different
situations and cultures.

But recent work by psychologists provides evidence that some content
is universal and native. Theories of mechanics are present by the age
of three months; highly elaborated theories of mind make their
appearance before the age of four, are universal, and may also be
native. Some anthropologists maintain that schemes for understanding
the biological world and even some for understanding the social world
are universal and native, as are some knowledge structures for
representing the spirit world.

Psychologists -- and philosophers in this case as well -- may turn out to
be wrong in assuming that all mental processes are universal, native
and unalterable. Though early in the 20th century there were claims by
Soviet psychologists Vygotsky and Luria that cognitive processes were
historically rooted, differentiated by culture, and alterable by
education, they were largely ignored. But findings have cropped up
from time to time that fit these assertions. Deductive rules may be a
trick learned in the process of Western-style education; rational
choice procedures may be applied primarily by economists and only in
very limited domains by lay people; statistical rules (Piaget's
"probability schema") may be used only to a very slight extent by
non-Western peoples.

Authors of this year's questions have asked how radical the
differences among universes, mathematical systems, and kinds of life
might be. How radical could the differences among humans be in basic
knowledge structures and inferential procedures? What has to be shared
or even inborn? What can be allowed to vary?

Richard Nisbett is Professor of Psychology and Co-Director of the
Culture and Cognition Program at the University of Michigan and the
author of numerous books.
________________________________________________________________

"Can democracy survive complexity?"

As any parent of adolescents has probably experienced, life has become
sufficiently complex that emotional maturity by the end of the teen
years is a thing of the distant past. If only adolescence were over by
25!

More seriously, for democracy to function, representatives need to make
critical value trade-offs for citizens. But how can citizens send
messages on how they would like their values to drive policies when
the issues are so complex that very few citizens -- and not too many
politicians either -- really understand enough of what might happen,
and with what probabilities, to know how to make decisions that do
optimize the value signals from citizens?

The ultimate in irrationality is to make a decision that doesn't even
advance your values because the situation is so complex that the
decision makers -- or the public -- can't see clear connections
between specific policies and their potential outcomes (as one who
works on the global warming problem I see this conundrum all the
time).

The capacity to be literate about scientific and political
establishments and their disparate methods of approaching problems is
a good start, but such literacy is not widespread, and the complexity
of most issues leaves the public and decision-makers alike disconnected
from core questions. Educational establishments often call for more
content in the curriculum to redress this, but I think more understanding
of the context of scientific debate and of political and media epistemologies
will go further to build the needed literacy.

Stephen H. Schneider is Professor in the Biological Sciences
Department at Stanford University and author of Laboratory Earth.
________________________________________________________________

"What Is Real?"

The question of what is "real," defined here as the physical universe,
acquires special subtlety from the perspective of brain and cognitive
science. The question goes beyond semantic quibbling about the
difference between physical stimuli and our perception of them.
(Consider the old question, "If a tree falls in the forest, was a
sound made if no one is present to hear it?" The answer is "no,"
because a sound is a sensation that must be perceived by an observer,
and no observer was present to hear it.) The startling truth is that
we live in a neurologically generated, virtual cosmos that we are
programmed to accept as the real thing. The challenge of science is to
overcome the constraints of our kludgy, neurological wetware, and
understand a physical world that we know only second-hand. In fact, we
must make an intuitive leap to accept the fact that there is a problem
at all. Common sense and the brain that produces it evolved in the
service of our hunter-gatherer ancestors, not scientists.

Sensory science provides the most obvious discrepancies between the
physical world and our neurological model of it. Consider these
physical to perceptual transformations: photons stimulate the
sensations of light and color; chemicals produce tastes and odors; and
pressure changes become sounds. Yet, there is no "light" or "color" in
the wave or photon structure of electromagnetic radiation, no "sweet"
in the molecular structure of sugar, no "sound" in pressure changes,
etc. The brain produced these sensory attributes. Sensation is the
arbitrary experience that is correlated with a physical stimulus, but
is not the physical stimulus itself. Our brain manages these
psychophysical transformations in such a convincing manner that we
seldom consider that we are sensing a neurological simulation, not
physical reality. When do we question the physical meaning of "blue,"
"pain," or "B-flat?" Consider also the apparent seamlessness of the
reality illusion. Using a visual metaphor, our sensory environment is
like that of a person trapped in a tiny house, from which the
universe must be viewed through peep-holes, one for each sensory
channel, such as vision, taste, hearing, etc. From this limited,
peep-hole vista, we synthesize a seamless, noisy, bright, flavorful,
smelly, three-dimensional panorama that is a hypothesis of reality.
The peep-hole predicament is invisible to us. (Some animals have
peep-holes we lack, such as those associated with electric or magnetic
field perception.)
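
One standard way to make this stimulus-versus-sensation gap concrete -- not
Provine's own formulation, but textbook psychophysics -- is Stevens' power
law, which relates judged sensory magnitude S to physical stimulus
intensity I:

    S = k * I^n

where k is a scaling constant and the exponent n is measured empirically
and differs by modality (roughly 0.3 for brightness, roughly 0.6 for
loudness, and well above 1 for electric shock). The mapping is lawful, but
the sensation is in no sense a copy of the physical variable.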

Sensory examples are instructive because the nature of the
psychophysical linkage is relatively clear. It's easy to imagine
sensory limits of bandwidth (the size of our "peephole"), absolute
sensitivity, or even modes of sensitivity (our "peep-holes").
Neurological limits on thinking may be as common as those on sensing,
but they are more elusive -- it's hard to think about what you can't
think about. A good example from physics is our difficulty in
understanding the space-time continuum -- our intellect fails us when
we move beyond the dimensions of height, width, and depth. Other
evidence of our neurological reality-generator is revealed by its
malfunction in illusions, hallucinations, and dreams, or in brain
damage, where the illusion of reality does not simply degrade, but
often splinters and fragments.

Why am I interested in this question? As a neuroscientist, I want to
understand how the brain evolved, developed, and functions. As a
biologist, I believe that all organisms are a theory of their
environment, and it's necessary to understand that environment. As an
amateur astronomer and cosmologist, I want to know the universe in
which I live. To me, physics, biology, neuroscience and psychology are
different approaches to a similar set of perceptual problems. It's no
coincidence that Hermann von Helmholtz, a great physicist of the 19th
century, appreciated that you can never separate the observer from the
observed, and became a founder of experimental psychology. The
distinction between psychology and physics is one of emphasis. The
time has come for experimental psychologists to return the favor and
remind physicists that they should be wary of confusing the physical
world with their neurologically generated model of it. The frontiers
of physics may be an exciting playground for the adventurous cognitive
scientist. Ultimately, physics is a study of the behavior of
physicists, scientists trying as best they can to understand the
physical world. The intellectual prostheses of mathematics, computers,
and instrumentation loosen but do not free our species of the
constraints of its neurological heritage. We do not build random
devices to detect stimuli that we cannot conceive, but build outward
from a base of knowledge. A neglected triumph of science is how far we
have come with so flawed an instrument as the human brain and its
sensoria.

Robert R. Provine is Professor of Psychology and Neuroscience at the
University of Maryland and author of Laughter: A Scientific
Investigation.
________________________________________________________________

"Is there, or should we expect, a fracture in the logical basis on
which people now look for a description of the nexus between particle
physics and cosmology?"
Question: Since the 1930s, we have had to live with Gödel's theorem --
the apparently unshaken proof by the logician Kurt Gödel that there
can be no system of mathematical logic that is at once consistent (or
free from contradictions) and complete (in the sense of being
comprehensive). The question is whether there is, or whether we should
expect, such a fracture in the logical basis on which people now look
for a description of the nexus between particle physics and cosmology.

Why: The chief interest of Gödel's theorem is that it is a negative
answer to one of the questions in David Hilbert's celebrated list of
tasks for the twentieth century, put forward at the International
Mathematics Congress in Paris in 1900. Mathematicians in the
succeeding century seem not to have been unduly incommoded by Gödel.
But if there were a comparable theorem in fundamental physics, we
should have more serious difficulties. Perhaps the circumstance that
string theory is getting nowhere (not fast, but slowly) should be
taken as a premonition that something is amiss. The search for a
Theory of Everything (latterly gone off the boil) may logically be the
wild-goose chase it most often seems. If science had to abandon the
principle that to every event there is a cause (or causes), the cat
would really be among the pigeons.

Moral: Gödel's theorem needs seriously to be revisited, so that the
rest of us can properly appreciate what it means.
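
For readers who want the precise claim Maddox is gesturing at, one standard
modern statement of Gödel's first incompleteness theorem runs as follows (a
paraphrase, not Maddox's wording):

    For any consistent, effectively axiomatizable formal system T that is
    strong enough to express elementary arithmetic, there is a sentence
    G(T) in the language of T such that neither G(T) nor its negation is
    provable in T.

The companion second theorem adds that such a system cannot prove its own
consistency. Whether anything analogous constrains a final theory of
physics is exactly the open question raised here.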

Sir John Maddox, who recently retired after serving 23 years as the
editor of Nature, is a trained physicist and the author of What Remains
to be Discovered: The Agenda for Science in the Next Century.
________________________________________________________________

"Are space, time, and all other physical quantities only relational?"
What do we actually know about the physical world after the scientific
revolution of the last century? Before the 20th century, the picture
of the physical world was simple: matter formed by particles (and
fields) moving in time over the stage of space, pushed and pulled by
forces, according to deterministic equations, which we could write
down. That's it. But the 20th century has changed all that in depth.
Matter has quantum properties: particles can be delocalized -- as if
they were clouds -- although they always manifest themselves as a single
point when interacting with us. Space and time are not just curved:
they are dynamical entities, very much like the electric and magnetic
fields. Is there a new, consistent picture of the physical world that
takes all this new knowledge into account?

The most remarkable aspect of quantum theory is its relational
character: elementary quantum events (such as a certain quantum
particle being "here") only happen in interactions, and, in a precise
sense, they are only "real" with respect to, or in relation with,
another system. Indeed, I can see the particle "here", but at the same
time the particle and I can be in a quantum superposition in which the
particle has no precise localization. Thus, a quantum particle is not
just "here", but only "here for me".

On the other hand, the most remarkable property of general relativity
is that localization in space and time is not defined. Things are
only localized with respect to other things. In fact, the spacetime
coordinates have no meaning in general relativity, and only quantities
that are independent of these coordinates (such as relative
localizations) have physical meaning.

Now the question is: are the quantum relationalism (quantum systems
have definite properties only when interacting with other systems) and
the general relativistic relationalism (position is only relative)
connected to each other? Are they indeed two aspects of the same
relationalism?

There is clearly some deep connection. In order to interact quantum
mechanically, two systems must be close in space and time, and,
vice versa, spacetime contiguity can only be checked via a quantum
interaction. So, is perhaps spacetime just the geography of the net of
the quantum interactions? Is the world just made of relations?

We are far from understanding all this, and the current highly
speculative physical theories haven't even started addressing these
kinds of questions. But until we address these questions -- which, for
me, are the interesting ones in physics -- the great revolution of the 20th
century is not over. We have lost the old picture of the physical
world, but we do not yet have a credible new one.

Carlo Rovelli is a theoretical physicist at the Centre de Physique
Theorique in Marseille, France.
________________________________________________________________

"Why bother? Or: Why do we go further and explore new stuff?"

Many human skills enable an individual to do something with less
physiological effort. If you are good at skiing (and I am not) it
takes less energy to climb that mountain. One can even argue
forcefully that a mental "understanding" of a phenomenon allows one to
perceive it with less of an increase in brain metabolism.

But not all skills are directed at reducing expenditure.
Many creative activities involve a huge effort to explore new issues
or phenomena. The better skier goes beyond the first mountain. New
worlds and ideas are explored. Why do we bother -- or why do some of
us bother?

One could argue that we explore new phenomena to produce skilful
insights that will in the future allow us to visit the same phenomena
again with less effort. But is that really enough? Can such a
functional explanation of creativity as an initial effort devoted to
enabling a future reduction of effort really capture the reasons for
people to involve themselves in lifelong efforts to understand the
world of ants or the intricacy of ski dope?

It seems that President John F. Kennedy captured an essential element
in creative efforts when he, in his famous speech at Rice University
in 1962, argued for the decision to create the Apollo program: "We
choose to go to the moon in this decade and do the other things, not
because they are easy, but because they are hard, because that goal
will serve to organize and measure the best of our energies and
skills..."

Indeed, the most important outcome of Apollo -- offering earthlings an
outside view of their planet, visualizing the vulnerability of the
Earth and its biosphere -- was an unintended result of making a major
effort. It did pay off to do something hard. Somehow we know that
doing something hard, rather than something easy, is fruitful. But we
also know that doing it the hardest way possible (like when I ski) is
not a very efficient way of getting anywhere.

We want to be efficient, but also to do difficult things. Why? In a
sense this is a rephrasing of Brian Eno's question in Edge 11: "Why
Culture?" Many different approaches can be taken, involving different
disciplines such as economics, anthropology, psychology, evolutionary
biology, etc.

An idea currently explored in both economics and evolutionary biology
could be relevant: costly signals. They provide the answer to the
question: How does one advertise one's own hidden qualities (in the
genes or in the bank) in a trustworthy way? By giving a signal that is
very costly to produce. One has to have a strong bank account, a very
good physiology (and hence good genes) or a strong national R&D
programme to do costly things. The more difficult, the better the
advertising.

Perhaps we bother because we want to show that we are strong and
worthy of mating? Culture is all about doing something that is so
difficult that only a healthy individual or society could do it.

If so, it's not at all about reducing the effort, it's all about
expanding the effort.

Tor Nørretranders is a science writer, consultant, lecturer and
organizer based in Copenhagen, Denmark and author of The User
Illusion: Cutting Consciousness Down to Size.
________________________________________________________________

"Why do people kill other people?"

No offense against another human being inflicts greater costs than
killing. Simply put, it's bad to be dead. Nonetheless, hundreds of
thousands are murdered every year; tens of millions over the past
century. From baby killing to genocide, from Susan Smith to Osama bin
Laden, people in every culture experience the urge to kill. Some act
on it. They do so despite legal injunctions, religious prohibitions,
cultural interdictions, the risk of retaliation, and the threat of
spending life in a cage. Dead bodies, a trail of grief, and a thirst
for vengeance lie in their wake.

Many believe that they already know the answer to the question of
cause. But existing theories woefully fail to explain why people
murder. Theories that invoke violent media messages, for example,
cannot explain the high rates of homicide among tribal cultures that
lack media access. Theories that invoke uniquely modern causes cannot
explain the paleontological record -- ancient skulls and skeletons
that contain arrow tips, stone projectiles, and brutally inflicted
fractures. The stones and bones of the past leave no doubt that murder
has been a persistent problem of social living throughout human
history. We need to understand why.

David M. Buss is Professor of Psychology at the University of Texas,
Austin, and author of Evolutionary Psychology: The New Science of the
Mind.
________________________________________________________________

"What is the difference between the sigmundoscope and the
sigmoidoscope? Less cryptically, how is everyday narrative
logic different from extensional mathematical logic?
It differs in countless ways, most of them poorly understood. It
generally deals with individuals rather than analyses or averages,
with motives and reasons rather than movements and causes. It has a
point of view rather than a "view" from nowhere. As the writer's maxim
says, it shows rather than tells, contains dialogue rather than only
declarative sentences, relies on context rather than raw data alone,
is open-ended and metaphorical rather than determinate and literal, is
tied to a particular time rather than being timeless, and deals with
emotions rather than impersonal facts. Furthermore, narrative logic
must deal with the notion of "common knowledge," whereby two or more
people know something, know that the others know it, know that the
others know that others know, and so on. In short, narrative logic is
much harder than mathematical logic.

In everyday "story logic," how "we," the story-tellers, characterize
an event or person is crucial. If a man touches his hand to his
eyebrow, for example, we may see this as an indication he has a
headache. We may also see the gesture as a signal from a baseball
coach to the batter. Then again, we may infer that the man is trying
to hide his anxiety by appearing nonchalant, that it is simply a habit
of his, that he is worried about getting dust in his eye, or
indefinitely many other things depending on indefinitely many
perspectives we might have and on the indefinitely many human contexts
in which we might find ourselves. A similar open-endedness
characterizes the use of probability and statistics in surveys and
studies.

Furthermore, unlike mathematical logic, story logic does not allow for
substitutions. In mathematical contexts, for example, the number 3 can
always be substituted for the square root of 9, or for the largest whole
number smaller than the constant pi, without affecting the truth of the
statement in which it appears. By contrast, although Lois Lane knows
that Superman can fly, and even though Superman equals Clark Kent, the
substitution of one for the other can't be made. Oedipus is attracted
to the woman Jocasta, not to the extensionally equivalent person who
is his mother. In the impersonal realm of mathematics, one's ignorance
or one's attitude toward some entity does not affect the validity of a
proof involving it or the allowability of substituting equals for
equals.
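
A small illustration of the contrast (mine, not Paulos's): in an
extensional arithmetic context, substituting equal values never changes the
truth of a statement, whereas in a knowledge or "story" context,
substituting co-referring names can.

    import math

    # Extensional context: substituting equals for equals preserves truth.
    x = 3
    y = math.isqrt(9)                 # the square root of 9, also 3
    assert x + 4 == 7
    assert y + 4 == 7                 # safe: only the value matters

    # Intensional ("story") context: what Lois Lane would assent to depends
    # on the name used, even though both names pick out the same individual.
    lois_believes_can_fly = {"Superman": True, "Clark Kent": False}
    print(lois_believes_can_fly["Superman"])    # True
    print(lois_believes_can_fly["Clark Kent"])  # False -- substitution fails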

John Allen Paulos is Professor of Mathematics at Temple University,
adjunct professor of journalism at Columbia University, and author of
Once Upon a Number.
________________________________________________________________

"How much can we expect the social sciences to help build a just and
free society?"
Marx and Engels argued for "scientific socialism", that is, for a
political movement that would bring about a just and free society with
the help of science. No need to recall how the movements that they
inspired either failed to achieve much, or succeeded in establishing
societies tragically lacking in justice and freedom. Was the science
insufficiently scientific, or was the very idea of a scientific
socialism flawed? I became a social scientist (and then a cognitive
scientist and a philosopher) out of the conviction that what was
lacking in scientific socialism was a proper science of society. I
gave myself the goal of contributing to the development of a truly
scientific programme in the social sciences. Today, I believe some
significant steps have been taken in this direction, in particular by
beginning to bridge the gap between the social sciences and the
cognitive and, more generally, the natural sciences. But does this
bring us anywhere nearer, not "scientific socialism" (clearly an
obsolete notion), but, more generally, the possibility of using the
social sciences for radically bettering our world?

Most people understand the social relationships and institutions in
which they participate well enough to get the most (which often is not
much) out of their participation. The social sciences are, for the
most part, a systematized, de-parochialized, professionalized version
of this competence that we all have, to a smaller or greater extent,
as social actors. As such, the social sciences help us improve our
understanding of the social world; in particular, they help us better
understand the points of view of other actors in the same society and
of people in other societies. But this enhanced understanding is still
shallow, and strikingly weak in predictive power. It is, as far as
informing political action, little more than serious journalism
without the time pressure. The events of last September provide a
telling illustration: What did social scientists have to contribute to
our understanding of the events? Did interpretive anthropologists
provide a much deeper understanding of the fundamentalist terrorists?
Did sociologists give well argued and unexpected predictions as to how
the target societies would react? No, the contribution of social
scientists was, to say the least, modest. Still, the role of the
social sciences as enhancers of common sense social understanding may
be modest, but it is crucial in helping people overcome prejudices and
biases, and become better citizens not just of their own country, but
of the world. Immodest social scientists who presume to say what is to
be done should not be easily believed.

But might, in the future, a more scientific social science emerge
(probably alongside, rather than in place of, the more common sense
social sciences that we know)? Its role would not be to ground
political action -- it is not the role of science to say what is good
and what is bad -- but to inform it well enough so that more daring
long-sighted political action could be undertaken -- action that might
help build a more just and freer society -- without being all too
likely to have its unforeseen consequences compromise its initial
goals, as happened with communism. This is my question. I don't know
the answer.

Dan Sperber is a social and cognitive scientist at the French Centre
National de la Recherche Scientifique (CNRS) in Paris and author, with
Deirdre Wilson, of Relevance: Communication and Cognition. 
________________________________________________________________

"Why do people like music?"
People from every culture like listening to some kind of music, so it
seems that it is something that is wired into us. Is there an
evolutionary advantage to liking music?

W. Daniel Hillis is Chairman and Chief Technology Officer of Applied
Minds, Inc., a research and development company and author of The
Pattern on the Stone.
________________________________________________________________

"Why do we decorate?"
Why do all the human cultures that we know of decorate things? Why not
just leave them alone? Why put in all that extra, and apparently
non-functional, energy?

Brian Eno, an artist, makes and produces records. He has produced U2
("including this year's award- winning "All That You Can't Leave
Behind"), Talking Heads and Devo and collaborated with David Bowie,
John Cale, and Laurie Anderson.
________________________________________________________________

"Will unification ever come to a stop?"
Unification of opposites is an underlying theme in the development of
humanity. Newton showed us that the same laws govern the motion of
heavenly bodies and apples falling on Earth. Darwin unified the
concept of being a human with that of being another living organism.
There have been numerous other unifications in the history of mankind.
So, how will it go on? Which notions appearing to us as very distinct
today will turn out to be the same for future generations? Will there
ever be a limit to unification? Will we in the end be able to show
that everything just stems from one single fundamental idea? Or two?
Or many? Or infinitely many?
A more practical and immediate question is where the next step will
lead us. Which is the next unification of seemingly opposite and
distinct concepts? Maybe we should look at the really big questions
looming today and take them as hints for the next unifications. So, very
specifically, which of the questions raised in the Edge World Question
drive points towards the next unification?

Anton Zeilinger is a Professor of Physics at the University of Vienna
whose work in quantum teleportation has received world-wide attention.
________________________________________________________________

"How do women's minds work?"
Try this question on any man: All you'll get for an answer is a
shrug of the shoulders along with a puzzled facial expression. The one
thing neither rocket scientists nor astrophysicists will ever be able
to comprehend is how women think and feel. Bill Watterson's eternal
six-year-old Calvin (from "Calvin & Hobbes"), no smart scholar, but
the epitome of the self-assured yet forever puzzled boy, summarizes
his incomprehension of the opposite gender: "What is it like to be a
girl? Is it like being a bug? I imagine bugs and girls have a dim
perception that nature has played a cruel trick on them, but they lack
the intelligence to really comprehend the magnitude of it!"

In reality it is, of course, the other way around. Nature has played a
cruel trick on men - rather than on women. Men's minds, for the most
part, work along a single longitudinal path: A triggers B, B triggers
C and so forth. They consider themselves to be smart, because they are
barely able to grasp causal chains. Men's intelligence is expressed by
the extent to which they can estimate or predict a sequence of steps
in a chain reaction. Like chess players, some men can think one or two
steps ahead, some seven or eight. Alternatives to their
one-dimensional, allegedly "logical" path of thinking are beyond their
imagination.

Women's minds, on the other hand, are much more complex. Women embrace
several different natures in their personality. In addition to the
men's straightforward "logical" way of thinking, they (according to C.
G. Jung) incorporate a personification of the unconscious
counter-sexual image, in other words the inner man in a woman. This
archetype encompasses a number of instincts that are quite useful in
supplementing a woman's emotions. In addition, women's minds embrace a
third governing force, the so-called "shadow", a counter-image of
their true character. The working-type woman, for instance, can
identify with the feelings of a spoiled tootsie. A woman who has run
expeditions in Ethiopia, Somalia and Afghanistan all her life can
suddenly become flustered at a run in a nylon stocking. What makes
women so unfathomable to men is that they can leap in a split second
from one level of their personality to the other. As a consequence,
that charming lady you are flirting with suddenly turns into a
sharp-tongued businesswoman, only to react like a helpless college
girl in the next moment. It would be asking too much of a man's mind,
being merely a simplified, incomplete version of a woman's mind, to be
able to comprehend this kind of complexity in the opposite gender.

Of course, one might argue that men also incorporate an anima and a
shadow in their personality. So what? The effect of all three
personalities is still the same: A unilateral drive towards ambition,
competition and ultimately triumph. Let's face it: We men are
pathetically simple-minded. How simple-minded? The Swiss author Milena
Moser knows the answer. She lists the only three things men need to be
happy: admiration, oral sex and freshly pressed orange juice.

"What will happen when the increasing speed of communication, the
driving force behind cultural progress since the introduction of
husbandry, suddenly becomes irrelevant?"

I am convinced that there is a predominant driving force behind
cultural progress and that this driving force is speed of
communications. The ancestors of modern humans lived in caves and
hunted large mammals on essentially the same cultural level for over
two million years. The entire history of civilization spans only
the past 10,000 years.

In my opinion it began when, at the end of the Ice Age, sea level
rose, thereby drowning estuaries and creating innumerable natural
harbours. A high sea level invited people to climb aboard boats and
cross the sea, thus accelerating the exchange of information between
different peoples. Knowledge about new discoveries and achievements
spread more rapidly and the advance of culture received its first
major boost.

Since then, the acceleration of information exchange has driven
cultural progress. The wheel, sailing ships, trains, planes,
telephones and fax machines followed suit. Finally, the invention of the
World Wide Web caused one of the biggest hysterias in world economics.
Today, we can transfer five thousand copies of the entire
Encyclopaedia Britannica from (almost) any place on earth to (almost)
any other place on earth in only one second -- and at the maximum
possible speed, the speed of light.
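
A rough sanity check of that figure (my own back-of-envelope assumptions,
not Zangger's): taking the Britannica's text to be on the order of 44
million words, or very roughly 300 megabytes uncompressed, 5,000 copies in
one second implies a link of roughly 12 terabits per second -- the scale of
an entire optical backbone rather than any single user's connection.

    # Back-of-envelope arithmetic behind "5,000 Britannicas per second".
    # Assumed inputs (illustrative only, not Zangger's figures):
    words_per_copy = 44_000_000      # rough size of the Britannica's text
    bytes_per_word = 7               # average word length plus a space
    copies = 5_000

    bytes_total = words_per_copy * bytes_per_word * copies
    bits_per_second = bytes_total * 8            # transferred in one second
    print(f"{bits_per_second / 1e12:.1f} terabits per second")  # ~12.3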

After ten thousand years of cultural progress, mankind is now reaching
the point at which any amount of information can be transferred to any
place at the speed of light. The increasing speed of communication,
the driving force behind cultural progress since the introduction of
husbandry, suddenly becomes irrelevant.

What will happen to progress as this threshold is crossed?

Eberhard Zangger is the geoarchaeologist who uncovered the most
plausible explanation in 2,500 years for the legend of lost Atlantis,
and the author of The Future of the Past.
________________________________________________________________

"Is the PC desktop really dead?"

Much ado has been made lately over the problems of the PC "desktop
metaphor," the system of folders and icons included in Macintosh and
Windows PCs. Critics of the desktop rightly point out that today's PC
users encounter much more information than in the 1980s, when the
desktop was first introduced. While I understand these criticisms, I
question whether the desktop is really dead -- in other words, whether
the solution really lies in building a better desktop. Instead, I
think that the real issue is the increased information, not the
interface between it and the user.

Some technologists are ready to discard the old desktop. Last month
MIT's Technology Review ran a piece on new software attempting to
bypass the desktop metaphor. None of the tools are very convincing.
Scopeware, a software package from Mirror Worlds Technologies (founded
by David Gelernter, an Edge contributor), essentially removes all file
hierarchy by showing files sorted by creation date. While the tool has
some nice search features, it's unclear how removing all file
hierarchy is an improvement over today's desktop. Other technologies
in the article include a two-dimensional graphical "map" of the file
system and a 3-D navigable space. These programs try to solve the
problem of a cluttered desktop by presenting a new metaphor that could
become just as cluttered.
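
For concreteness, here is a minimal sketch of the "drop the hierarchy, sort
by date" idea (an illustration only, not Scopeware's actual
implementation): it walks a directory tree and presents every file in a
single reverse-chronological stream.

    import os
    from datetime import datetime

    def chronological_stream(root):
        """Flatten a directory tree into one list of files, newest first."""
        entries = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                entries.append((os.path.getmtime(path), path))
        # The folder hierarchy is ignored; only modification time orders the view.
        return sorted(entries, reverse=True)

    # Show the twenty most recently touched files under the current directory.
    for timestamp, path in chronological_stream(".")[:20]:
        print(datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M"), path)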

To be sure, there are advances to be made in the tools. Using
Microsoft Windows, even briefly, reveals so many interface flaws that
it makes me cringe. But fixing these myriad flaws will not address the
central issue, which is the tsunami of information pouring into
users' PCs. It is the user, not the tool, that should be the focus.

The Wall Street Journal recently interviewed several Americans to
inquire about their personal strategies for dealing with their e-mail.
Receiving 50 to 150 incoming messages per day, these PC users
described the methods they use to stay on top of their information and
remain effective in their jobs.

What's interesting about this article is that the Journal recognized
e-mail use as a personal activity. Many other business activities,
like using approved software or submitting timesheets, may be closely
regulated by the IT department -- but not e-mail. Each user in the
article has become conscious of his or her information flow and has
created a system to manage it, using the software (albeit flawed) at
his or her disposal. The story is about personal needs first, tools
second. The industry's response to this problem should be the same. If
we could just teach more users to use their tools better, we'd be in
far better shape than if we simply churned out yet more complex
software.

I would be happy to be proven wrong. Gelernter's Scopeware, for
example, could turn out to be a revolutionary advance in curing
information anxiety. My guess, however, is that even the best tools
will fall short of a cure. We may need a combined strategy of better
tools and greater education of users about the nature of a world awash
in information. To be effective in coming years, users must assume
greater responsibility for their own information management.

Of course, there are problems with that proposition. For one, new
desktop metaphors, like the 3-D software, are sexy and make for
interesting press clips. Educating users is decidedly dull. What's
worse, there is no easy business plan to educate users en masse in
more efficient ways to organize their information. Making a tool that
promises to help is so much more profitable. But tools alone won't
save us. If all we can do for users is give them a newer, flashier,
more distracting interface, then the desktop may indeed be dead
forever.

Mark Hurst is the founder of Creative Good, Inc., a leading user
experience consulting firm.
________________________________________________________________

" Does life on Earth have a future?"
By "life on Earth" I mean the variety of life, the multitude of
species, the dazzling array of ecosystems they create from the
permanent snow fields of the Himalayas to steamy jungles, and coral
reefs, and the variety of including ourselves including and the 6000+
languages we speak and our cultures that they largely define.
There are two answers: no and yes.
A median estimate is that a third of all the species will be on the
fast track to extinction within the next quarter century. Over 90% of
all languages will be gone by then, because languages spoken by fewer
than a million people are rarely taught to children. Most tropical
forests will be gone by 2025 and with them, their species and peoples.
Global warming will ensure that the species that survive do so in the
wrong place. Coral reefs will be cooked alive in too-warm oceans,
tropical glaciers will long have been only a memory preserved in the
National Geographic photo collection.

So what will it mean for humanity to live in such a biologically
impoverished world? I always think of Orange County, California, with
an airport named after an actor. A fake cowboy/war hero (delete as
appropriate) to introduce you to a desert world with nitrogen-enriched
green lawns, no sidewalks, golf courses, imported water. Instant
gratification reigns. The future? Don't worry, be happy. Enough people
like that world; property values are high.

But suppose we saved the variety of life on Earth, grasped the nettle
of global warming, and, in general, thought about our human futures.
What would that tell us about ourselves -- and what we are capable of
achieving? What would it take to accomplish that?

Answer the life-on-Earth question, and whichever answer one picks, much
about ourselves will be revealed.

Stuart Pimm is Professor of Conservation Biology at Columbia
University in New York and author of The World According to Pimm: A
Scientist Audits the Earth.
________________________________________________________________

"Is it possible to know what is good and what is evil?"
For the past four centuries, the attempt to answer this question has
been the main driving force of world history -- not only the history of
ideas, but also the history of politics and collective violence. This
is true for two reasons:

1. It is impossible for people to live without constructing some
cognitive structure (which philosophers call practical reason) that
asks and answers questions concerning how to live and what to do --
traditionally, by formulating them in moral or ethical terms, as how
we should live and what we ought to do.

2. When humanity made the transition, at the time of the scientific
revolution of the 17th century, to a new and higher stage of its
collective cognitive development by progressing from theology and
philosophy to science, it became more and more difficult for people
to see how it could be possible to answer the old pre-scientific
theological and philosophical questions, "what is good and what is
evil?" Those questions came to be seen as unanswerable and hence
meaningless because what the scientific revolution showed, above
all, was that what we call "knowledge" (scientia) is possible when,
and only when, it can be framed in the form of hypotheses that can
be confirmed or disconfirmed by means of experience, i.e.,
empirical data and observations.

That entailed, for example, the conclusion that metaphysical knowledge
(knowledge of Absolute Reality, or God, as It, He or She exists
independently of our perceptual and conceptual apparatus) is
unattainable. (Nietzsche called this the "death of God.") But that was
not an insuperable problem, because metaphysics was immediately
replaced by physics, which had far greater cognitive power to predict,
explain and control the phenomena being cognized anyway.

What has been an insuperable problem, up to now, has been the
unavailability of any cognitively adequate replacement for ethics.
Moral knowledge is unattainable because there is, in principle and by
definition, no conceivable moral hypothesis that could possibly be
proved or disproved by means of any conceivable type of empirical
data, test or experiment. That is true, among other reasons, because
moral statements do not take the form of empirically testable
hypotheses, or hypothetical imperatives ("If you want X, then you can
get it by doing Y" - but with no guidance as to whether you should
want X in the first place). Moral statements take the form of value
judgments and categorical imperatives (i.e., commandments or orders as
to what you should do or want). Commandments can never be true or
false, so they cannot communicate knowledge. And value judgments are
incapable of communicating knowledge about the external world; the
only things they can express are subjective wishes, tastes and
preferences which are, from a logical and epistemological point of
view, completely non-rational and arbitrary, matters of whim, about
which we can only say De gustibus non disputandum est.

Of course, it has always been known that beauty exists in the eye of
the beholder. What had not been seen so clearly, until the scientific
revolution, was that the same was true of good and evil. The first
modern personality, Hamlet, expressed this clearly in 1601 when he
said "There is nothing either good or bad but thinking makes it so."
I.e., good and evil are words for subjective preferences, sentiments
of approval or disapproval, that exist only in the mind of the
beholder. They do not exist as objective realities whose validity can
be known or tested, proved or disproved. And Hamlet's fate shows how
confused, paralyzed, violent and self-destructive people can become
when they have recognized that it is impossible to know what one
"should" do, but have not yet discovered how to replace that question
with one that is answerable.

Thus, it is not only God (and the Devil) that are dead; more
importantly, so are Good and Evil, the abstract philosophical concepts
of which the former are the concrete mythological and theological
incarnations. As Ivan Karamazov put it (speaking for those for whom
God is the only credible and legitimate source of moral authority),
"without God anything is possible, everything is permitted." But even
those who, following Kant or Rawls, would like to place their faith in
pure (a priori) reason, and would trust it to take the place of God as
the source of moral knowledge, are doomed to disappointment and
ignorance; for even Kant made it clear that moral knowledge was
unattainable. As he put it, "I must destroy knowledge in order to make
room for faith (Glaube, also translatable as "belief")." That is, even
the most dedicated champion of pure (a priori) practical reason as the
source of moral knowledge had to admit that moral knowledge is
unattainable; all he could put in its place was faith. And by the time
he wrote those words, the Age of Faith had long since been dead and
buried. Indeed, the whole history of modern science was one long
demonstration that knowledge was attainable when, and only when, one
replaced faith with its opposite, the attitude of universal doubt, and
refused to believe any proposition that had not been tested against
empirical evidence.

One inescapable consequence that followed from all this was the loss
of credibility of the traditional sources of moral authority (God and
pure reason). Why did that create such a crisis that most of human
history since the 17th century has been a series of attempts to come
to terms with it, both in theory and in practice? Because human nature
abhors a cognitive vacuum, especially in the sphere of practical
reason. For without some way of answering the questions that practical
reason asks, concerning how to live and what to do, humans are totally
disoriented and without direction, a condition that is intolerable and
panic-inducing. Once they have discovered the cognitive inadequacies
of the moral way of formulating those questions and answers, as they
have to an increasing extent since the scientific revolution of the
17th century, and have not yet discovered how to progress to a more
cognitively adequate form of practical reason, many people will
regress to a more intellectually primitive and politically reactionary
set of questions and answers. In the 20th century these took the form
of political totalitarianism, which led to genocide; more recently,
they have taken the form of religious fundamentalism, which has
increasingly led to apocalyptic terrorism. Given the existence of
weapons of mass destruction, it hardly needs to be stressed how much
both of these ideologies potentially threaten the survival of our
species.

These political/ideological movements have been widely, and correctly,
interpreted as rebellions or reactions against modernity (whether
modernity is conceived of as Western civilization, Jewish science,
modern technology, religious unbelief, freedom to express any opinion,
or whatever), though usually without specifying what it is about
modernity that threatens our very existence and survival. The deepest
threat, I would maintain, is cognitive chaos in the realm of practical
reason, and thus nihilism in the realm of morality, anomie in the
realm of law, and anarchy in the realm of politics. The paradox is
that the political movements that have been most widely interpreted as
nihilistic and "evil" -- Nazi, Stalinist and theocratic totalitarianism
and their sequelae, genocide and terrorism -- in fact originated as
desperate (and misguided) attempts to ward off nihilism and what their
adherents consider "evil." To them, the greatest evil is modernity, or,
in other words, the modern scientific mentality, which replaces
certainty with doubt, dogmatism with skepticism, authority with
evidence, faith with agnosticism, coercion with persuasion, violence
with words and ideas, and hierarchy with democracy and equality of
opportunity -- all of which fills them with overwhelming dread and
terror, amounting to a kind of existential or moral panic.

In fact, to the totalitarian/fundamentalist mind, modernity not only
represents absolute evil; it represents something even worse than
that, namely, the total absence and delegitimation of any standards of
good and evil whatsoever -- the total death of good and evil, a state of
complete anomie and nihilism. For without knowing what is good and
evil, how can one know what to do? And without knowing what to do, how
can one live (not only biologically, but even mentally)? How can one
maintain any mental, emotional, social, cultural or political
coherence and order? As Kenneth Tynan remarked, "Hell is not the place
of evil; rather, Hell is the absence of any standards at all." That
condition is so intolerable to humans that many will regress to even
the most irrational and destructive ideology if they cannot find some
more epistemologically powerful cognitive structure with which to
replace the old moral way of thinking, once its cognitive inadequacy
has been so deeply perceived that its credibility has been
irreversibly destroyed.

Cognitive growth occurs by finding better and better answers to
existing questions. Cognitive development occurs only when one begins
to ask a new and different set of questions. We do this only when we
notice that our current questions are meaningless because they are
unanswerable, so that they need to be replaced with a different set of
questions that can be answered. By this point, in the 21st century, we
now realize that it is impossible to answer the moral (and legal and
political) questions, "How should we live and what ought we to do?"
The only questions that are meaningful, in that they can lead to
answers that possess cognitive content or knowledge, are the questions
"How can we live? i.e., what biological, psychological and social
forces, processes and behavior patterns promote, protect and preserve
life, and which ones cause death?" For that question can be answered,
by means of empirical investigation as to the causes and prevention of
the extinction of species (including our own, as by nuclear holocaust
or unrestrained devastation of our natural environment), the
extermination of social groups (through epidemics of collective
violence, such as war, genocide, poverty, famine, etc.), and the
deaths of individuals (by means of homicide, suicide, obesity,
alcoholism, etc.). In other words, the only possible replacement for
ethics or morality that is progressive rather than regressive is the
human sciences -- human biology, psychology and psychiatry, and the
social sciences.

Unfortunately, the modern human sciences, unlike the natural sciences,
had not yet been invented when the scientific revolution of the 17th
century first showed that moral knowledge was unattainable. And even
today, the ability of the human sciences to predict, explain and
control the objects of their scrutiny (human behavior) is extremely
limited, whether compared with that which the natural sciences possess
with respect to their objects of study, or with the degree of
cognitive power that the human sciences will need to attain if we are
to gain the ability to avert the headlong rush to species-wide
self-destruction that we currently seem to be embarked upon. In other
words, to paraphrase Winston Churchill's remark about democracy, the
human sciences are the worst (the least cognitively adequate) of all
possible forms of practical reason -- except for all the others (such as
moralism, fundamentalism and totalitarianism)! What that implies is
that nothing is more important for the continued survival of the human
species than a stupendously increased effort to make progress in the
further development of the human sciences, so as to increase our
understanding of the causes of the whole range of our own behaviors,
from life-threatening (violent) to life-enhancing.

James Gilligan has been on the faculty of the Department of Psychiatry
at the Harvard Medical School since 1966. He is the author of
Violence: Reflections on a National Epidemic.
________________________________________________________________

"Are space and time fundamental concepts or are they approximations to
other, more subtle, ideas that still await our discovery?"
It is hard to conceive of a universe that does not exist in space and
persist through time: space and time seem to be the basic framework of
the cosmos. Yet what is space and what is time? Are they "things" or
are they merely the language we use for organizing events we witness
in the world? Moreover, are they even fundamental? Could it be that
space and time conveniently summarize more basic ideas somewhat as
temperature summarizes the motion of atomic constituents? Will we one
day discover "atoms" of space and time -- true, fundamental elements of
which space and time as we now know them are simply coarse
approximations?

Brian Greene is a professor of physics and of mathematics at Columbia
University and author of The Elegant Universe: Superstrings, Hidden
Dimensions, and the Quest for an Ultimate Theory.
________________________________________________________________

"Are we ever going to be humble enough to assume that we are mere
animals, like crabs, penguins, and chimpanzees, and not the chosen
protégés of this or that God?"
Recent events around the world remind us of historical phenomena
observed since the dawn of civilizations: wars, genocides, oppression,
conquests, occupations, and, of course, killings in the name of some
God. Although the underlying principles are the same, modern killings
are more sophisticated, spectacular, and effective than those in the
past. In a matter of hours, you can now hijack a plane and crash it
into an office building, killing thousands, or you can (as was
done more than 50 years ago) drop an atomic bomb on a city, killing
hundreds of thousands of civilians. You can see all that on TV.
Studying non-human animals, contemporary biology, evolutionary theory,
and modern ethology have gathered enough knowledge to respond to
questions regarding the nature of aggression, social power, alliance
formation, hierarchical domination, and attack-defense behavior.
Psychology, anthropology and cognitive science have added important
pieces, extending this knowledge to human animals. From these studies
contemporary science has gained a deep understanding of what peace
is. In a nutshell, the moral is that there is no absolute, ideal
or ultimate peace in the animal kingdom. Peace turns out to be a
fragile local phenomenon that depends on circumstances, population
density, biological needs, availability of resources, and so on. If
you value peace, the best you can do is to provide conditions for
peace, not to "install" peace itself.
So, if we have good scientific knowledge about the nature of peace,
how come we don't have peace on earth? Well, because we don't want to
accept that we are animals. We prefer to continue believing that we
are the protégés of our own created Gods, and that we are, in a
transcendental sense, different from a chimpanzee. Peace for humans is
taken to be something profound, spiritual and pure, not a bio-social
emerging phenomenon. Our created Gods provide the moral values that
define what the absolute and ultimate peace is supposed to be, and who
is supposed to impose it. There is no surprise then if we see
intransigent world and religious leaders calling for holy wars,
fighting the Evil in the name of the Good, and justifying in the name
of peace, the bombing of civilians, the construction of missile
shields, or the occupation of foreign territories.
If we really value peace (but I am not sure this is what some world
powers really want!), what we need is to provide sustainable
conditions for peace. And for this, it would be much easier to know
how to do it if we assumed, once and for all, that we are indeed animals.
Rafael Núñez is professor of Cognitive Science at the University of
California at San Diego, and author of Where Mathematics Comes From
(with George Lakoff).
________________________________________________________________

"What is value?"
Oscar Wilde once said that "A fool is someone who knows the price of
everything and the value of nothing". Economists have struggled with
this question for several centuries and have largely given up - most
modern economists tacitly assert that price and value are the same
thing, except for possible "externalities" that prevent the market
system from functioning correctly. But many of us still believe that
the value of a good poem or a comforting word may not be fully
reflected in its price, and that value to society and GDP are only
weakly correlated.
The question behind this question is whether there is an objective
basis for saying that one thing is more valuable than another. In the
world of esthetics, value is inevitably subjective. But perhaps this is not
as manifest in other domains. For example, in engineering is it
possible to say that one design is inherently better than another?
This is closely related to the long-standing and much-debated question
of evolutionary progress. Is there a sense in which we can clearly say
that organisms tend to evolve toward better designs, taken over
sufficiently long spans of time and space? When we compare the
non-living world of four billion years ago to the rich biosphere of
the present, the comparison seems obvious to some of us. But this is
hotly contested by others, who point to the lack of objective criteria
for quality of design.
I think that, with functionality as the arbiter, a mathematical
framework for distinguishing good and bad designs may be an achievable
goal. This has scientific importance for engineering and economics,
and profound implications for philosophy, religion, and even politics.
In the postmodern world objectivity is out of fashion. Perhaps it is
time for reality to make a comeback.
J. Doyne Farmer, one of the pioneers of what has come to be called
chaos theory, is McKinsey Professor at the Santa Fe Institute and the
co-founder and former co-president of Prediction Company in Santa Fe,
New Mexico.
________________________________________________________________

"Who am I? What am I?"
Perhaps I am this stuff here, i.e., the ordered and chaotic collection
of molecules that comprise my body and brain.

But there's a problem. The specific set of particles that comprises my
body and brain is completely different from the atoms and molecules
that comprised me only a short while (on the order of weeks) ago. We
know that most of our cells are turned over in a matter of weeks. Even
those that persist longer (e.g., neurons) nonetheless change their
component molecules in a matter of weeks.

So I am a completely different set of stuff than I was a month ago.
All that persists is the pattern of organization of that stuff. The
pattern changes also, but slowly and in a continuum from my past self.
From this perspective I am rather like the pattern that water makes in
a stream as it rushes past the rocks in its path. The actual molecules
(of water) change every millisecond, but the pattern persists for
hours or even years.

So, perhaps we should say I am a pattern of matter and energy that
persists in time.

But there is a problem here as well. We will ultimately be able to
scan and copy this pattern in at least sufficient detail to replicate
my body and brain to a sufficiently high degree of accuracy that the
copy is indistinguishable from the original (i.e., the copy could pass
a "Ray Kurzweil" Turing test). I won't repeat all the arguments for
this here, but I describe this scenario in a number of documents,
including the essay "The Law of Accelerating Returns".

The copy, therefore, will share my pattern. One might counter that we
may not get every detail correct. But if that is true, then such an
attempt would not constitute a proper copy. As time goes on, our
ability to create a neural and body copy will increase in resolution
and accuracy at the same exponential pace that pertains to all
information-based technologies. We ultimately will be able to capture
and recreate my pattern of salient neural and physical details to any
desired degree of accuracy.

Although the copy shares my pattern, it would be hard to say that the
copy is me because I would (or could) still be here. You could even
scan and copy me while I was sleeping. If you come to me in the
morning and say, "Good news, Ray, we've successfully reinstantiated
you into a more durable substrate, so we won't be needing your old
body and brain anymore," I may beg to differ.

If you do the thought experiment, it's clear that the copy may look
and act just like me, but it's nonetheless not me because I may not
even know that he was created. Although he would have all my memories
and recall having been me, from the point in time of his creation, Ray
2 would have his own unique experiences and his reality would begin to
diverge from mine.

Now let's pursue this train of thought a bit further and you will see
where the dilemma comes in. If we copy me, and then destroy the
original, then that's the end of me because as we concluded above the
copy is not me. Since the copy will do a convincing job of
impersonating me, no one may know the difference, but it's nonetheless
the end of me. However, this scenario is entirely equivalent to one in
which I am replaced gradually. In the case of gradual replacement,
there is no simultaneous old me and new me, but at the end of the
gradual replacement process, you have the equivalent of the new me,
and no old me. So gradual replacement also means the end of me.

However, as I pointed out at the beginning of this question, it is the
case that I am in fact being continually replaced. And, by the way,
it's not so gradual, but a rather rapid process. As we concluded, all
that persists is my pattern. But the thought experiment above shows
that gradual replacement means the end of me even if my pattern is
preserved. So am I constantly being replaced by someone else who just
seems a lot like me from a few moments earlier?

So, again, who am I? It's the ultimate ontological question. We often
refer to this question as the issue of consciousness. I have
consciously (no pun intended) phrased the issue entirely in the first
person because that is the nature of the issue. It is not a third
person question. So my question is not "Who is John Brockman?"
although John may ask this question himself.

When people speak of consciousness, they often slip into issues of
behavioral and neurological correlates of consciousness (e.g., whether
or not an entity can be self-reflective), but these are third person
(i.e., objective) issues, and do not represent what David Chalmers
calls the "hard problem" of consciousness.

Whether or not an entity is conscious is apparent only to that entity
itself. The difference between neurological correlates of
consciousness (e.g., intelligent behavior) and the ontological reality
of consciousness is the difference between objective (i.e., third
person) and subjective (i.e., first person) reality. For this reason,
we are unable to propose an objective consciousness detector that does
not have philosophical assumptions built into it.

I do say that we (humans) will come to accept that nonbiological
entities are conscious because ultimately they will have all the
subtle cues that humans currently possess that we associate with
emotional and other subjective experiences. But that's a political and
psychological prediction, not an observation that we will be able to
scientifically verify. We do assume that other humans are conscious,
but this is an assumption, and not something we can objectively
demonstrate.

I will acknowledge that John Brockman did seem conscious to me when he
interviewed me, but I should not be too quick to accept this
impression. Perhaps I am really living in a simulation, and John was
part of the simulation. Or, perhaps it's only my memories that exist,
and the actual experience never took place. Or maybe I am only now
experiencing the sensation of recalling apparent memories of having
met John, but neither the experience nor the memories really exist.
Well, you see the problem.

Ray Kurzweil was the principal developer of the first omni-font
optical character recognition system, the first print-to-speech
reading machine for the blind, and the first CCD flat-bed scanner,
among other major inventions, and is the author of The Age of
Spiritual Machines.
________________________________________________________________

"Why is life so full of suffering?"
It is a bit embarrassing to admit a preoccupation with this gigantic
old question, but it is human, I suppose. Tackling it straight on
seems to be an exercise in hubris, but if you stick to science, you
soon realize that we are still struggling to figure out what the
question is. It helps, I think, to distinguish four separate
questions.
The first question is why capacities for suffering exist at all. Why
do organisms care if they are injured? Why do they try so hard to
avoid dying? Why do they fight just to have sex? Why do we experience
a certain kind of pain just from being ignored? Such motives,
behaviors, and experiences are made possible by brain mechanisms
shaped by natural selection. While many individual experiences of
suffering arise because something has gone wrong, either in a person's
life or brain, the capacities for suffering and pleasure exist because
they are useful, at least for the genes that make them possible. This
is terribly sobering. Many people still confuse the question of why
capacities for suffering exist, with the very different question of
what causes suffering in individual instances. I have called this the
"clinician's fallacy" because doctors and therapists so often treat
defenses as if they were diseases. Eventually the distinction will
become clear.
The second question is why we so often continue to do things that make
us miserable. Why do we pursue goals we can't reach given that this
causes so much unhappiness? Why can't we take Buddha's advice and
transcend our desires? The answer is that people who have given up
difficult goals have had fewer children. These goals are not just
wealth, power and sex. Trying without success to protect and help
one's children causes intense suffering and everyone recognizes why we
can't give up this goal. The evolutionary origins of our motives do
not make us helpless puppets but they can help us to understand why
controlling our desires is difficult.
The third question is why we treat others the way we do. It is silly
to say that people are innately generous or selfish, but the fact of
poverty is universal. I spent this week on call, where the truth hits
you in the face: for all the riches of our society, millions of people
have no job, no money, few friends, and not even a warm place to
sleep.
Politicians enact policies that make it even easier for the rich to
keep their riches. This is nothing new, but neither is it unalterable.
Any improvement, however, needs to start from the realities of human
nature.
The fourth question is very different. The other three ask why people
are mostly the same, but this one instead asks why people are
different. The explanations for differences in suffering include
differences in genes, experiences, personalities, and social settings.
Most of our efforts to understand suffering have been here. This
research provides genuine knowledge, but only part of a complete
answer.
Big problems often motivate proposals for grand quick solutions that
give rise to horrendous unanticipated consequences. A gradual
deepening of our evolutionary understanding of ourselves offers more
modest but surer hope. Many misgivings about evolutionary approaches
to human behavior come from a simple misconception. Natural selection
explains how the competitive struggles of life shaped us, but this
does not mean that life is only a struggle, nor does it mean that life
cannot be made better. Quite the contrary. If we want to prevent
social catastrophes and gradually improve our world, we had better
start with a real understanding of why we are the way we are. Negative
psychology tells us why some people are unhappy and how bad this is
for them. In another corner, positive psychology tells us why some
people are happier than others and how good this is for them. What we
need now is "diagonal psychology" that investigates the costs of
experiencing positive emotions when they are not warranted, and the
benefits of capacities for suffering. This will offer a real
foundation for understanding why the world is so full of suffering.
Randolph M. Nesse is Professor of Psychiatry and Professor of
Psychology at the University of Michigan and editor of Evolution and
the Capacity for Commitment.
________________________________________________________________

"How do we scale up the number of quality human relationships one
person can sustain by many orders of magnitude? In an increasingly
connected world, how does one person interact with a hundred thousand,
a million or even a billion people?"

Our one fixed resource is time -- human attention. As we become
increasingly networked in the technological sense, we also become more
networked in the social sense.

As our social networks scale up, we move more and more of our
interactions to the technological sphere. We can have many more
telephone interactions than we can have hand-written letter
interactions. When we move from telephone to e-mail, the number of
interactions between people goes up even more dramatically.

Then we pair our e-mail interactions with a personal Web site, and we
start moving our personalities into the technology net, as a way of
automating and scaling up the number of relationships even further.

We end up with personal CRM systems to handle our increased
interaction load, and then add interfaces from our technology net to
our human forms. These interfaces will develop from current-day Palm
Pilots and BlackBerrys to heads-up-display-style interfaces in
glasses and eventually retinal and neuronal interfaces.

"Hi Jerry, Ahh.., we met back in 1989, May 14th at 7pm, and since then
we've exchanged 187 e-mails and 39 phone calls. I hope your cousin's
daughter Gina had a wonderful graduation yesterday."
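
To make this concrete, here is a minimal sketch of the kind of
interaction record such a personal CRM might keep. The class, field,
and example names below are purely hypothetical illustrations, not a
description of any existing system:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Contact:
        # One person in a hypothetical personal CRM.
        name: str
        first_met: datetime
        emails: int = 0
        calls: int = 0
        notes: list = field(default_factory=list)

        def summary(self) -> str:
            # The kind of recap a heads-up interface might whisper to you.
            return (f"Met {self.name} on {self.first_met:%B %d, %Y}; "
                    f"{self.emails} e-mails and {self.calls} calls since.")

    jerry = Contact("Jerry", datetime(1989, 5, 14, 19, 0), emails=187,
                    calls=39,
                    notes=["Cousin's daughter Gina graduated yesterday"])
    print(jerry.summary())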

The whole range of interactions becomes organized. Introductions from
one person to another, and rating systems become automated.

Currently many people run into barriers as their personal networks
approach the range of thousands of people. Soon they will move to the
tens of thousands, to the millions and beyond.

With these trends, the friction costs of personal introductions go
down, and consequently the value of quality measurement and
gatekeeping goes up dramatically. As the depth of knowledge in a
relationship increases, the threshold point at which you 'really know
someone' increases also. It's an arms race of intimacy.

Adrian Scott is founder of Ryze, a business networking community. He
is a founding investor in Napster, got his Ph.D. in nonlinear
optimization at age 20, and has sung with Placido Domingo and
performed with the NYC Ballet.
________________________________________________________________

"After postfeminism, what's next?"
Women of a previous generation said that their own mothers had missed
out on the fruits of feminism. Like many women in my cohort, I
discovered that my mother was born too early for postfeminism.
Of course, postfeminism makes sense only when basic legal and civil
rights exist for both sexes -- it's an irrelevant luxury for too many
women on this planet. Letitia Baldrige, the dean of American manners
(among other things), recently defined her own position as that of a
"conservative feminist." It makes sense, for the restless privileged
daughters of Western feminism, to become moderate postfeminists -- not
centrists, exactly, but realists.
Feminism is a seductive, useful and powerful ideology, provoking
reaction and rebellion whenever it becomes an established player. When
will postfeminism be a viable option the world over? Will it ever be
possible? And, in those cultures where postfeminism plays an important
role in women's lives, what's the next step? Is postfeminism a toy or
a tool?

Tracy Quan is a member of the International Network of Sex Work
Projects. She is the author of the novel Diary of a Manhattan Call
Girl.
________________________________________________________________

"Do languages matter?"

A language dies when there is nobody left to speak it.

By the best estimates, around 6,000 languages are alive in the world
today. Half of them, perhaps more, will die in the next century --
that's 1,200 months from now. So this means that somewhere in the
world, a language dies about every two weeks.
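
The arithmetic behind that figure is easy to check; here is a quick
back-of-the-envelope sketch (a toy calculation using only the
estimates just quoted):

    # Back-of-the-envelope check of the "one every two weeks" figure.
    living_languages = 6000                  # best current estimate quoted above
    expected_deaths = living_languages / 2   # "half of them, perhaps more"
    months_in_a_century = 100 * 12           # the 1,200 months mentioned above

    deaths_per_month = expected_deaths / months_in_a_century   # 2.5 per month
    days_between_deaths = 30 / deaths_per_month                # about 12 days

    print(deaths_per_month, "language deaths per month")
    print("about one every", round(days_between_deaths),
          "days -- roughly every two weeks")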

Why do languages die? There are many reasons -- natural disasters (for
instance, if an entire village of speakers is killed in a flood, or
wiped out in a disease epidemic), social assimilation (speakers cease
using their native language and adopt a more popular language in
response to economic, cultural, or political pressures). Genocide,
colonization, and the forced suppression of languages are also causes.

The belief that language diversity is healthy and necessary is often
compared to the case for biodiversity -- the idea that a wide array of
living species is essential to the planet's well-being.

Michael E. Krauss, of the University of Alaska's Alaska Native
Language Center, extends this analogy to define three stages of
language health in The World's Languages in Crisis:
moribund: "languages no longer being learned as mother-tongue by
children"

endangered: "languages which, though now still being learned by
children, will -- if the present conditions continue -- cease to be
learned by children during the coming century," and

safe: languages with "official state support and very large numbers of
speakers."

If we measure the value of a language simply by the number of people
it allows us to communicate with, bigger would always be better, and
the death of an endangered language would be of no consequence to the
rest of the world. If 128 million people speak French, and roughly 100
people speak Pomo -- a nearly extinct indigenous language in
California -- then French is more than a million times as valuable as Pomo.

But language is not math. Language is the embodiment of cultural identity.
Language is nuance, context, place, history, ancestry. Language is an
animate being; it evolves, it adapts, it grows. Language is the
unique, neural fingerprint of a people. Language is a living code that
provides structure for human experience. Language is intellectual DNA.

Does diversity of thought and culture matter?

Does human diversity matter?

Then, language matters.

Xeni Jardin is a freelance journalist and conference manager.
________________________________________________________________

"Is our brain smart enough to understand the brain?"

Here is a paradox for cognitive neuroscientists: We're trying to
understand the brain with the very mental resources that are afforded
by our brains. We hope that the brain is simple enough that we can
understand it; but it needs to be complex enough to do the
understanding.

This is not completely unrelated to Gödel's theorem, which states --
roughly -- that in any sufficiently complex formal system, there exist
truths that are inaccessible to formal demonstration. Strictly
speaking, Gödel's theorem does not apply to the brain because the
brain is not a formal system of rules and symbols. Still, it
is a fact that the tightly constrained structure of our nervous system
constrains the thoughts that we are able to conceive. Our mathematics,
for instance, is founded on a small set of basic objects: a number
sense, an intuition of space, a simple symbol-manipulation system...
Will this small set of representations, crafted by evolution for a
very different purpose, suffice to understand ourselves?

I see at least two reasons for hope. First, we seem to have a
remarkable capacity for constructing new mental representations
through culture. Through metaphor, we are able to connect old
representations together in new ways, thus building new mathematical
objects that extend our brain's representational power (e.g. Cartesian
coordinates, a blend between number and space concepts). Second, and
conversely, Nature's bag of tricks doesn't seem so huge. Indeed, this
is perhaps the biggest unanswered question: how is it that with a few
simple mathematical objects, we are able to understand the outside
physical world in such detail? The mystery of this "unreasonable
effectiveness of mathematics", as Wigner put it, suggests a remarkable
adaptation of our brain to the structure of the physical world. Will
this adaptation suffice for the brain to understand itself?

Stanislas Dehaene is a cognitive scientist at the Institut National de
la Santé and author of The Number Sense: How Mathematical Knowledge Is
Embedded In Our Brains.
________________________________________________________________

"Who and what are the we in we?"

Humans are, to our knowledge, the only species who can inquire into
the nature of nature. So it is not just narcissism that drives our
efforts to understand what makes humans different from other animals.
Often we are drawn to the great achievements of Homo sapiens in the
arts, science, mathematics, and technology, because we view these
achievements and the minds that created them as the paragon of what
makes us special. The assumption is that these minds got an extra dose
of the best of what makes humans human. But several lines of evidence
are now coming together to suggest something a bit different and, for
many people, more than a bit disturbing.

It is now well known that great achievers are disproportionately
likely to suffer from mental illnesses. Severe mental illnesses,
particularly bipolar disorder, are much more common among the greatest
novelists, poets, painters, and musicians, than among your everyday H.
sapiens, especially in recent centuries as the great accomplishments
have become more abstract, that is, less normal. A Freudian might
explain this association by a suppressive social environment that
generated both the creativity and the illness. To geneticists,
consideration of familial associations suggests a genetic cause. What
flows from these perspectives is the dogma that has dominated most of
the past century: mental illness and mental creativity result
primarily from an interaction between stressful environments and
unusual human alleles.

A careful consideration of the evidence and application of natural
selection, however, implicate another cause: infectious agents. People
with schizophrenia and bipolar disorder, for example, are more likely
to be born in late winter or spring, at least among those born in
temperate latitudes.
This pattern is a smoking gun for prenatal or perinatal infectious
causation, which can also explain the known familial associations as
well as or better than human genetics. And human genetics does not
offer sensible explanations of other aspects of these diseases, such
as the season-of-birth associations, the urban/rural associations, or
the
high fitness costs associated with the diseases. People with severe
mental illnesses commit suicide at a rate that is far too high to
allow the maintenance of causal alleles simply by the generation of
those alleles through mutation.

Noninfectious environmental influences may help explain some of these
associations, but so far as primary causation of severe mental
illnesses is concerned, none of the noninfectious environmental or
allelic candidates have stood up to the evidence to date as well as
infectious candidates. The arguments will eventually move toward
resolution through the discovery of the causal agents whether they be
alleles, pathogens or some noninfectious environmental influence.
Alleles have been claimed as major causes of these diseases but
retractions have followed claims as soon as adequate follow-up studies
have been conducted. In contrast, evidence for associations between
infectious agents and severe mental illnesses has mounted over the
past decade in spite of much less funding support.

The associations between mental illness and creativity make sense from
an evolutionary perspective. If our minds evolved to solve the
challenges associated with hunting/gathering societies, we can expect
the normal mind to be poorly equipped to produce some of the
accomplishments valued by modern society, whether they be a new style
of painting or complex mathematical proofs. If neuronal networks could
fire differently, then new mental processes could be generated. Most
of the re-networking that accompanies severe mental illnesses makes a
person less functional for the tasks valued by society. But every now
and then the reorganized brain generates something different,
something that we consider extremely valuable. To distinguish this
abnormality that we esteem from the abnormality that we pity, we use
the term genius. If the geniuses of today were mentally ill at a rate
no greater than that of the general population, then we could
reasonably assume that genius was simply one tail of the naturally
selected distribution of intellectual capacities.

The high rates of mental illness among the highest achievers,
particularly in the arts, however, demand a different explanation. If
the illnesses
associated with such creativity are caused by infection and the
infection cannot be explained as a consequence of the creative
lifestyle, as indicated by the season of birth associations, then the
range of feasible explanations is narrowed. The least tortuous
conclusion is that prenatal infections damage the development of the
brain, generating a brain that functions differently from the
naturally selected brain. Most of the time these pathogens just muck
up the mind, causing mental illness without generating anything in
return. But in a few lucky throws of the dice, the result is a
different mind that is brilliantly creative.

At this level of accomplishment it is looking more and more like the
we in we does not belong just to Homo sapiens but also to a variety of
parasitic species. It may include human herpes simplex virus, Borna
disease virus, Toxoplasma gondii, and many more yet-to-be-discovered
species that alter the functioning of our brains, usually for the
worse, but occasionally generating minds of unusual insight. Richard
Dawkins's concepts of the extended phenotype and meme return with
extended license. In addition to viewing characteristics of an
organism as an extension of a manipulator species for the benefit of
manipulator genes, some characteristics that humans prize as the best
of what makes humans human may be side effects that do not actually
benefit the manipulator. They are in effect cultural mutations
generated as side effects of biological parasitism. Like biological
mutations the cultural mutations are often detrimental, but sometimes
they may create something that humans value: A Starry Night, The
Raven, Nash equilibria, or perhaps even calculus. The devastation
associated with these characteristics, which often involves extreme
fitness loss -- suicide with damage rather than benefit to kin --
cannot be explained by natural selection acting solely upon humans.
The principles of natural selection emphasize that we have to consider
other species that live intimately within us as part of us, affecting
our neurons, shaping our minds.

Paul W. Ewald is a professor of biology at Amherst College and author
of Plague Time.
________________________________________________________________

"Will cognitive science change the way we think as much as other
sciences have?"
Physical science has changed how we think. Those with a basic
education no longer think of the sun revolving around the earth, or of
matter as made up of earth, air, fire, and water. The germ theory of
disease is well known, as is DNA.

Cognitive science is newer and it is not yet well-known, even among
prominent scientists, and the corner of cognitive science I work in --
cognitive linguistics -- is even less well-known. Yet its results are
just as startling and it has just as much capacity for changing how we
think.

As I read through the questions posed by my distinguished colleagues
from other disciplines, I realized that the very questions they posed
look very different to me as a cognitive linguist than they would to
most very well educated Edge readers. It occurred to me that simply
commenting on their questions from the perspective of a cognitive
linguist would provide some idea of how the world might look different
to someone who is acutely aware of the findings of cognitive science,
especially cognitive linguistics.

With the greatest of respect for my colleagues who raised the
following questions, here is one cognitive scientist's perspective on
those questions, given the findings in my discipline.

Todd Feinberg asks: "What is the relationship between being alive and
having a mind?"
The mind is embodied, with particular concepts "computed" by highly
specialized neural circuitry that is part of the brain and connected
to the body, especially the sensory-motor system and the emotional
system. As Antonio Damasio has repeatedly observed, rationality is
impossible without emotions, that is, without the appropriate
activation of the brain's emotional centers.

Here are some examples of the ways that conceptual thought depends on
the peculiarities of the body and brain: Spatial relations concepts
arise from structures in the visual system like topographic maps and
orientation-sensitive cells. The way we structure events appears to
arise from neural schemas for motor control and perception in the
prefrontal cortex. Abstract reasoning makes use of embodied reasoning
via metaphoric projections from the sensory-motor system to higher
cortex. Our vast system of primary conceptual metaphors appears to
develop spontaneously during childhood because just about all children
have certain recurrent experiences in the world.

In short, without a body with a brain functioning in the world, there
are no concepts and there is no mind. Computers don't think. They
don't understand. They just compute.

David Myers: "Why do we fear the wrong things?"
Because we all have conceptual systems that make use of prototypes,
conceptual frames, and conceptual metaphors, which operate below the
level of consciousness. These are neither literal, nor even consistent
with each other. Yet we understand the world in terms of those
conceptual structures. It is inevitable that human beings (scientists
included) will tend to act in terms of their natural cognitive
structures, which they use automatically and unconsciously, rather
than in terms of scientific rationality, which requires both training
and conscious effort.

Timothy Taylor asks: "Is morality relative or absolute?"
Neither. Human moral systems are not absolute, but not unrestrictedly
relative either. There is a relatively small set (about two dozen) of
metaphors for morality found around the world, and these, together
with traditional models of the family, give rise to a limited range of
moral systems. The ethics of care (which I call "nurturant morality"
in Moral Politics) is, so far as I have been able to determine, the
system best suited to human flourishing. In short, I agree with Taylor
about the ethics of care.

David Deutsch asks: "How are moral assertions connected with the
world of facts?"
Certain moral systems are inconsistent with what is known from
cognitive science. For example, strict father morality, which requires
absolute moral strictures and an absolute moral authority, simply is
out of synch with how the mind works. This is a case where an 'ought'
can arise from an 'is.'

Douglas Rushkoff: "Are stories the only way we have of interpreting
our world -- meaning that the forging of a collective set of mutually
tolerant narratives is the only route to a global civilization?"

In a word, yes. Interestingly enough, the kinds of stories that
defined civilizations seem rather restricted in character, as do the
kinds of stories that define what a possible "history" is. It is, of
course, an open question as to whether a "global civilization" is
possible, or even desirable. Diversity is a crucial value.

Jordan Pollack asks: "Is there Progress?"

What constitutes "progress" depends on your conceptual system,
especially your moral system. I happen to agree with Pollack that the
Bush administration is morally regressive and that things have gotten
much worse in the past year, especially since September 11. But that
is because I (and probably Pollack) accept nurturant morality. If you
see the world in terms of strict father morality, as George W. Bush
does, then from that perspective, there has been "progress."

There is of course a difference between scientific progress and human
progress. As Bill Joy has observed, there is some scientific
"progress" that represents a huge backward step.

Lance Knobel asks: "Do we want to live in one world, or two?"
John Markoff asks: "Can wealth be distributed?"

Cognitive science is important here, because of certain myths arising
from moral conceptual systems, conceptual framing, and economic
metaphors. Here's what those myths are and how they work:

o The Market Myth: The market is seen metaphorically as a force of
nature that works optimally and that it is "unnatural" and dangerous
to "tinker with." It follows that, if the market determines the value
of your labor, that is "natural," "fair," and a consequence of an
optimal system.

This metaphor is disastrously at odds with how markets actually work.
Markets are constructed; for example, it took more than 900 pages of
regulations to build and constrain the global market of the WTO. The
stock market is constructed and maintained by the SEC and other
institutions.

o Moral Self-Interest: Strict father morality has a moral version of
Adam Smith's invisible hand metaphor: If everyone pursues his own
well-being, then the well-being of all will be maximized. This has the
corollary: Being a "do-gooder" (not pursuing your own self-interest)
screws up the system.

o The Bootstrap Myth: In America, everyone can pull himself up by his
bootstraps -- succeed if he works hard enough.

These myths work in concert in a disastrous way. In the U.S., about
one-quarter of the population (roughly, those without health care)
performs difficult and absolutely essential work that, because of the
structure of the economy, they cannot be paid fairly for -- caring for
children and the elderly, house cleaning, picking fruits and
vegetables, working in fast-food joints, doing day labor, and on and
on. Without them, our economy could not function. These workers make
possible the lifestyles of the upper three-quarters of the population.
Yet, for the most part, the economy is such that their employers
cannot afford to pay them a wage commensurate with their contribution
to the economy.

The result is what I call the "two-tier economy."

Though any one person might be able to pull himself up by his
bootstraps, one-quarter of the population cannot. For this society to
run, some quarter of the population has to do work that cannot be paid
commensurate with its value to the economy.

This is actually a failure of the way our economy is set up. Since
lower-tier workers effectively work to keep the economy going, they
should be paid by the economy as a whole -- via the way markets are
commonly constructed and tweaked, via the tax code. Provide for a
negative income tax. The money is there in the economy.

Why doesn't this happen? Partly because of the greed and power of the
wealthy. But also because of the three myths given above. They hide
the nature of the problem and its solution.

Globally, of course, the situation is much worse. Our current
regulations constructing the global marketplace are unethical. An
ethical globalization -- one based on an ethics of care -- is needed.

Karl Sabbagh asks: "Would an extra-terrestrial civilization develop
the same mathematics as ours? If not, how could theirs possibly be
different?"
When one looks at the details of what is needed biologically to form a
human conceptual system, it turns out to be an awful lot of very
special biological structure that evolved via a very, very long
sequence of biological accidents. So many that, despite the vastness
of the universe, the probability that another enormously long sequence
of biological accidents would produce anything like the intelligence
we know is virtually null. There are no extra-terrestrial
civilizations.

Mathematics especially shows a dependence on the details of human
bodies and brains, as Núñez and I show in Where Mathematics Comes
From. Number arises from very special neural circuitry. Advanced
mathematical ideas arise from a long series of interlocking conceptual
metaphors. The most important of these is what we refer to as the
Basic Metaphor of Infinity, which allows one to use finite experience
to metaphorically characterize the idea of actual infinity -- which
stands outside the experience of finite beings. A vast portion of
modern mathematics depends on this metaphor.

Margaret Wertheim: "How can we understand the fact that such complex
and precise mathematical relations inhere in nature?"
They don't inhere in nature.

Mathematics makes use of the same conceptual apparatus used by the
human mind generally, which allows for mathematical ideas -- ideas
grounded in our bodies and that mostly make use of metaphor.
Mathematical ideas, like other ideas, don't go floating around in the
air. Those ideas arise from human brains that evolved to run human
bodies and don't exist outside those brains.

The neural capacity to link ideas to symbols is central to
mathematics. Computation is made possible by neural mappings that link
mathematical ideas to their symbolizations, in such a way that
conceptual inferences can be mirrored by symbolic computations.

Scientists are astute observers of nature. They use their conceptual
systems to understand nature and to classify natural phenomena and to
reason about them. Science uses ideas like change, size, proportion,
inversion, and so on. Mathematics uses the same ideas, mapped
precisely onto symbolizations. Thus, there are physical phenomena that
change in inverse proportion to their size and there is a mathematics
that expresses the same ideas with accompanying computations. The
correlation between the mathematics and the world occurs in the mind
of the scientist, because scientists understand the world in terms of
ideas, and those very ideas either occur in the conceptual system of
existing mathematics or scientists make up a new mathematics to
mathematicize those ideas.

Paul Bloom: "How will people think about the soul?"

David Gelernter: "Why is religion so important to most Americans and
so trivial to most intellectuals?"

John Horgan: "Do we want the God machine?"

Religion has many aspects -- at least the following, which cognitive
science has something to say about:
o The Metaphorical Aspect: There are three basic classes of metaphors
for God that arise naturally.

First, the personification metaphors, centering on God as Parent,
typically a father. Eve Sweetser has observed that if you take the
properties of the father (progenitor, authority figure, powerful
person, protector, he loves you, etc.), you get the other commonplace
metaphors for God (creator, lawgiver, king or lord, shepherd, lover,
and so on).

Second, the same basic metaphor of infinity that underlies actual
infinity in mathematics characterizes God as infinite: all-knowing,
all-powerful, first cause, the highest good.

Third, the immanence metaphor: God is the world. (Do not say God is
not in the stone; God is in the stone!) Most traditions have immanent
versions (e.g., Kabbalistic Judaism), and immanence seems central to
Buddhism.

o The Explanatory Aspect: Religions claim to answer fundamental
questions: Where did we come from? What is the future? Is there life
after death? Are we mortal or immortal? Do we have a soul? Religions
commonly have prophets, who offer such explanations. Explanations come
in the form of rich metaphorical narratives. The highest calling is to
know God, or seek to do so, according to whatever metaphor for God one
is using.

o The Moral Aspect: Religions are fundamentally moral. They tell you
how to live, what is good or bad. They often use the metaphor of Moral
Accounting in one way or another, with good and bad deeds added up and
balanced. This is often tied up with issues of either Karma (moral
accounting with the universe) or reward and punishment in an
afterlife. In addition, there are saints (figures who set examples for
us to follow), devils (evil-doers who set examples for us not to
follow),
and martyrs (who have suffered for the religion and thus gain extra
credit). Following a religion is not easy, and involves considerable
responsibility and discipline.

o The Experiential Aspect: Forms of spiritual experience, which we now
know are physical in character -- brain states. Religious experience
is also communal, and communities are vital to religion.

From cognitive science, we know that thought, perception, and even
personality are embodied in the brain: you can't think, see, or be who
you are without appropriate neural activity in the right parts of the
brain. Thus, if you had a disembodied soul that could live on after
death, it couldn't see (without a visual cortex), couldn't hear
(without an auditory cortex), couldn't feel (with none of the brain's
emotional centers), couldn't have empathy (with no mirror neurons),
wouldn't have a memory, and wouldn't have your personality (without
the right prefrontal cortex). In short, it wouldn't be much of
anything, certainly not much of you.

It is easy to debunk aspects of religion, like religious explanations
and notions of the soul, and in cases like creationism, it is
important to do so. But there are very good cognitive reasons that
people find meaning in religion -- and believe religions. Religions
fit common metaphors. Religions provide moral guidance for life that
makes sense because religions use common metaphors for morality --
morality as accounting (summing up good and bad), purity, uprightness
(heaven is up, hell is down), and so on. Religions provide spiritual
practice, which is seen as a way to gain knowledge (of God), to
connect with the infinite (God), and if followed, can lead to
spiritual experience (a real physical experience), involving a sense
of the elimination of boundaries and of connectedness with others and
with the universe. Religions also provide a spiritual community, in
which one can connect with others dedicated to the same ideals.

"Do we want the God machine?" No. The point of religion is the
practice, the path, the moral life, and the connection with others and
the world in one's everyday life. The end point makes no sense and has
no point without traveling the path. The God machine will be ignored
by those for whom religion in all its aspects is important.

It has often been observed that science has many of the properties of
religion. Science seems to take the form of a religion based on the
immanence metaphor, with God as the universe and the highest calling
being to understand the universe (to know God). Many central questions
of science come from religion: Where did we come from? (The Big Bang)
What is the future -- will the universe keep expanding? Is the
universe finite or infinite? From this perspective, the drive for a
single unified Theory of Everything is metaphorically the drive to
know a single God.

Issues of immortality are central to science, with Reputation
metaphorically playing the role of the Soul in some respects. Seeking
knowledge is moral behavior, and making important discoveries is doing
Good. The reward can be immortality -- your reputation can live
forever. If you win a Nobel Prize, it is there forever, whether you
are or not; it makes you one of the immortals. There are saints --
Einstein, Darwin, Newton, etc. -- and saints' lives. There are even
relics (Einstein's brain) and reliquaries (Who got Einstein's
office?).

The explanations science offers are metaphorical. Conceptual metaphors
preserve inferences, which makes them useful for science. But
scientific metaphors differ from the metaphors of religion because
they presuppose empirical observation, and science uses very special
metaphors that not only preserve inference but that are
mathematicized, that is, that have a symbolic calculus attached, which
allows for calculations and predictions. Einstein's great metaphor in
general relativity was that time is a spatial dimension and that
gravity is curvature in space-time. The metaphor yields a beautiful,
predictive mathematics, but is no consolation when you fall and hurt
your knee and are told by a literal Einsteinian that no force acted on
it; rather, it moved along a geodesic in space-time.

A common metaphor in physics is What Exists Is What Can Be Observed,
which lies behind Lee Smolin's contribution. Then there are the
proposed new metaphors that come out in the questions:

o "Are the laws of nature a form of computer code that needs and uses
error correction?"

o "Is information the basic building-block of the universe?"

These are, of course, serious proposals to use new metaphors and the
mathematics that goes with them to yield laws of nature that make
better predictions.

A reasonable answer to David Gelernter's question is that scientists
do have a religion, science itself. As an immanence religion, in which
God is the Universe, the Universe becomes sacred, understanding the
universe becomes a form of knowing God, scientific practice is
religious practice, scientific discipline is devotion, the "work
trance" of the scientist is a form of meditation, scientific discovery
is moral action, a good reputation is a reward for moral action, and
the immortality of Reputation is the Immortality of Soul.

Science also has its cult aspects. They peek through occasionally in
Edge discussions. Sometimes, when I read Edge, I feel like I'm
standing at the supermarket checkout counter reading the National
Enquirer's stories about sightings of extraterrestrials.

"Is God nothing more than a sufficiently advanced extraterrestrial
intelligence?"

It could be a National Enquirer headline.
It is remarkable how many scientists -- respected scientists, great
scientists, even Nobel winners -- really believe in
extra-terrestrials. Not only that, but they believe that there are
extra-terrestrial scientists and even extra-terrestrial mathematicians
with the same mathematics we have. To a cognitive scientist, it's
quite charming, if occasionally as frustrating as encountering
creationists.

One would like to think that the belief in extra-terrestrial
scientists and mathematicians is just a lack of education in cognitive
science. That's not a field that most physical scientists or
mathematicians are trained in or even read. They don't learn all the
amazing details that go into the embodiment of concepts -- concept by
concept. They don't learn about the staggering number of biological
accidents that had to happen for cells to develop, and then neurons,
and then neural "computation," and neural networks, and then all the
myriad of further accidents required to get specialized neural
structures to run bodies, and after that to develop concepts and
reasoning biologically. Most scientists don't learn the details and so
don't know that the probability of anything like this happening twice
is virtually zero, despite the billions of stars out there. But I'm
not so sure that mere education would help.

There are reasons why ordinary folks believe in the soul, and there
are similar reasons why so many scientists believe in
extra-terrestrials with mathematics just like human mathematics.
Believing in the soul does not just allow for comforting beliefs --
that you will someday be reunited with loved ones who have died and
that you will get your reward for being good in heaven. It also has a
basis in experience, oddly enough.

Consider the phenomenon of hearing yourself think. When you hear
another person, there is an external sensory input coming from the
other person. But, though your thought is not something you can
perceive, there are neural connections linking ideas with the brain
centers for sound production and perception. The acoustic cortex can
be activated not just by external stimuli, but also by brain-internal
neural connections. When you "hear yourself think" the neural
activation is coming from inside the brain, but the experience, in
part, is similar to hearing a stimulus originating outside the body.
It is as though "you" are hearing another person express thoughts --
another "person" inside you, separate from your body. It is that
experience that makes it sensible to think in terms of a "soul" inside
you, capable of thought, but separate from your body.

The popular belief in extraterrestrials also has natural cognitive
origins. It seems to have arisen from the idea of the exotic
foreigner. Ming the Merciless in the Flash Gordon movies was made up to
look Asian. The reasoning seems to be: If there are people from other
countries, who looked vaguely like Westerners, but with somewhat
different features, language, and culture, there could be such folks
from other planets. The variations on extraterrestrials in science
fiction films go from Spock to Klingons to featureless creatures that
commandeer our bodies. There is an overlap as well with other
otherworldly creatures -- angels and devils. A common folk theory is
that it is human emotions that make us human; so Spock, for example,
has no emotions -- nor do machine- and insect-like extra-terrestrials.

But the most common theme is that extraterrestrials are foreigners --
explorers, exiles, or conquerors -- basically like us,
with language, reason, mathematical and scientific abilities, good and
evil motives, as well as bilateral symmetry, heads, eyes, ears,
mouths, arms, legs, and so on.

What interests me, as a cognitive scientist, is the physical
scientists' version, and the arguments usually given.

o The Hubris Argument: The progress of science is a move away from
human beings being at the center of the universe, starting with
Copernicus. This is one more step. This is a scientific, and hence
anti-religious, progression away from human beings being special,
being made uniquely in the image of God and being the unique
inheritors of the material world. The idea of extraterrestrials makes
us more modest
-- modesty is a moral trait -- and to even suggest that human beings
might be the only intelligent species in the universe is to show
enormous and inappropriate hubris.

o The Probability + Evolution Argument: First, the probability
argument: There are billions and billions of stars in the universe and
some small percentage of them have planets, and some small percentage
of the planets have the right chemical composition, atmosphere and
climate for life. Even if that percentage is small, the number of
stars is so large that the probability is high that the chemical and
climatic conditions for life exist elsewhere in the universe.

Then the Evolution argument: Once the chemicals and the climate and
atmosphere are right, then evolution takes over. Evolution is a
natural universal process in which more complex molecules are formed
from less complex molecules and a certain percentage are stable. The
process produces more and more complex molecules, until DNA-like
molecules capable of reproduction (and hence life) are produced and
start reproducing. Evolution then takes over. More complex life forms
are randomly produced and a certain percentage survive and reproduce
further. It is assumed that organisms with higher complexity will be
able to survive better than those with lower complexity, so that
evolution will naturally progress toward more complex organisms.
Eventually organisms with some intelligence will be randomly produced,
and since they will have an evolutionary advantage, they will survive.
The process will repeat, with more and more complex intelligent
organisms being produced, until they become intelligent like us, and
develop mathematics and science, which allow them to adapt optimally.

o The Math in the World Argument: It is assumed that the physical
universe works according to fixed laws stateable in mathematical
terms, and that the mathematics inheres in the material world
(logarithmic spirals in snails and nebulae, Fibonacci series in
flowers, quadratic equations in home runs). It is further assumed that
these laws are the same everywhere in the universe. Thus intelligent
beings who survived via evolution to function in the world must have
acquired the same mathematics.
These are standard arguments. The Hubris argument is not a scientific
argument at all and we will discount it. The Math in the World
argument is just false. It assumes that mathematics has no ideas, no
concepts, no symbolization linking ideas with symbols. Mathematics has
both. But ideas and symbolizations of them only exist in beings with
minds, and we have a pretty good idea, at least from the study of
human beings, what the peculiar nature of ideas is, and what kinds of
embodied neural structures are required to characterize those ideas.
Ideas like actual infinity, infinite sets, transfinite numbers, and so
on are not magically part of the physical world. It takes beings with
brains and bodies to have such ideas.

The Probability + Evolution Argument leaves out the probabilities for
the evolution of biological structures of the precise form capable of
"computing" just the right kinds of ideas to reason with in general,
as well as just the right ideas for the relevant mathematics and for
characterizing the symbolization of those ideas. Those biological
mechanisms and neural structures are so peculiar and complex that the
probability is effectively zero that the precise biological structures
for the right mathematical ideas will evolve ever again anywhere.
Those astronomically low probabilities are always left out of the
argument.
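
The structure of this objection can be seen in a toy calculation.
Every number below is a placeholder chosen only to show the shape of
the argument; none of the figures comes from the essay or from any
actual estimate:

    # Toy illustration of how one omitted factor changes the conclusion.
    stars = 1e22                    # placeholder count of stars
    p_conditions_for_life = 1e-6    # placeholder "small percentage" with
                                    # suitable chemistry and climate

    candidate_sites = stars * p_conditions_for_life
    print("Candidate sites for life:", candidate_sites)  # still enormous

    # The objection: also multiply in the probability that evolution
    # retraces the precise chain of accidents behind our concept-forming
    # neural structures.
    p_same_cognitive_structures = 1e-30   # placeholder for "effectively zero"

    expected_worlds_with_our_mathematics = (candidate_sites *
                                            p_same_cognitive_structures)
    print("Expected worlds with our mathematics:",
          expected_worlds_with_our_mathematics)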

Well, the counterargument goes, it happened once. How do you know it
couldn't happen again? Obviously we don't. But that's not the point. A
scientific argument must be positive. It must be an argument from
knowledge, not from lack of knowledge. No serious scientific argument
has ever been given that takes the relevant cognitive science into
account.

In short, the physical scientists who believe in extraterrestrial
intelligence are arguing in a way they would never get away with
arguing in their serious scientific fields. Why?

The answer is that their belief in extra-terrestrial mathematicians
who have our mathematics fits an important myth, an identity-defining
myth that Núñez and I, in Where Mathematics Comes From, called The
Romance of Mathematics. The Romance goes like this:

Mathematical entities and relations really exist. They structure this
universe and any possible universe. The physical universe works
according to mathematical laws that inhere in the universe itself,
independent of any beings. Correct reason is a form of mathematical
logic, which is a form of mathematics. Since the universe is
structured rationally, mathematical logic inheres in it too. Human
mathematics is part of the abstract, transcendent mathematics. A
mathematical proof is a discovery of a universal truth. It thus takes
one beyond the merely human and puts one in touch with transcendent
truth. To learn mathematics is thus to learn the language of nature, a
mode of thought that would have to be shared by any highly intelligent
beings anywhere in the universe. Because mathematics is disembodied
and rational thought is a form of mathematical logic, intelligent
thought can exist outside of living beings. Thus, machines can in
principle think.

Every part of this Romance is false. It contradicts what we know from
cognitive science and neuroscience. But it serves an important role in
the "religion" of many mathematicians and physical scientists. Indeed,
it is one of the defining narratives of that religion.

If God is taken in the immanent sense as being the universe, the
Romance says that those, and only those, who know mathematics can
understand the universe and thus, metaphorically, "know God." They are
"seers" who can see what ordinary folks cannot. Mathematics, according
to the Romance, takes you beyond yourself, to the realm of the
transcendent. Science and mathematics are therefore sacred activities,
and scientists and mathematicians become high priests of their
religion. They deserve not just respect, but awe. Great
mathematicians and physical scientists are therefore special beings,
like saints. As such, they can communicate with the angels--the
extraterrestrial scientists and mathematicians of superior
intelligence.

It is unlikely that most people will give up the soul on the basis of
what is known about cognitive science and neuroscience. It is too much
part of who they are. It is part of a concept of self-identity that is
physically in their brains and not likely to change.

It is just as unlikely that most mathematicians and physical
scientists will give up on the Romance and their own religious
identity just because cognitive scientists and neuroscientists have
found that the Romance is scientifically untenable. The Romance is
also part of their understanding of who they are, and as such, is
physically instantiated in the brains of many mathematicians and
scientists. That is why they believe in extra-terrestrials and why
that belief is not likely to change.

Will cognitive science change the way we think?
It changed the way I think, but not without a struggle to overcome the
views I had grown up with.

George Lakoff is Professor of Linguistics at the University of
California at Berkeley and author of Where Mathematics Comes From
(with Rafael Núñez).
________________________________________________________________

"Do 'folk concepts' of the mind have anything to do with what really
happens in the brain?"
When we speak about our experiences, we use terms like emotion,
perception, thought, action, motivation, attention, free will. And
these concepts have been the starting point for research and
speculation about the brain. But now the evidence is starting to mount
that our categories don't fit what's really going on, as far as we can
measure and describe. It may turn out that the differences between a
thought and an emotion, a perception and an action, a mood and a
belief, are part of our tradition of "folk psychology" -- the things
we tell ourselves to explain the world in ordinary conversation.

For hundreds of years the pattern in science has been to overturn folk
concepts, and it seems to me the brain may be the next field for such
a conceptual revolution. It may be that in a hundred years people will
speak of free will, or the unconscious, or emotion, in the way that we
now speak of "sunrise" or "forever" -- words that serve for day-to-day
talk, but don't map reality. We know the sun doesn't rise because it
is the earth that moves and we know that humanity and its planet and
the universe itself won't last forever. I see signs that concepts of
the mind are due for the same sort of revision. And so that's the
question I keep returning to.

David Berreby writes about science and culture. His work has appeared
in The New York Times Magazine, The New Republic, Slate, The Sciences
and many other publications.
________________________________________________________________

"Will non-sustainable developments (i.e., atmospheric change,
deforestation, fresh water use, etc.) become halted in pleasant ways of
our choice, or in unpleasant ways not of our choice?"
To my mind, by far the most important question concerns the way in
which our currently non-sustainable course gets resolved in the next
several decades. Our present course with regards to many of our
demands on the environment cannot be sustained for more than several
decades. Those demands include atmospheric change, deforestation,
fresh water use, global warming, overfishing, production of toxic
materials, utilization of available photosynthetic capacity, and
utilization of topsoil. Hence the interesting question is whether
these non-sustainable developments become halted in pleasant ways of
our choice, or in unpleasant ways not of our choice.

Jared M. Diamond is Professor of Physiology at the UCLA School of
Medicine and the Pulitzer Prize-winning author of the widely acclaimed
Guns, Germs, and Steel: The Fates of Human Societies.

