[Paleopsych] Edge: The World Question Center 2005

Premise Checker checker at panix.com
Fri Jan 7 20:44:06 UTC 2005


Howard,
For your leisurely weekend reading. I draw specific attention to the 
responses by Zimbardo (mentions the Lucifer Effect), Alun Anderson, 
Pentland, Lanier, Margolis, Harris, Blackmore, Seligman, and Gopnik.

I'm going to suggest to Brockman for 2006, "What would it take for you to 
reverse your three most cherished beliefs?"

Happy Gregorian calendar new year!

------------

The World Question Center 2005
http://edge.org/q2005/q05_easyprint.html
    ______________________________________________________________________

      "What Do You Believe Is True Even Though You Cannot Prove It?"
    ______________________________________________________________________


    God (or Not), Physics and, of Course, Love: Scientists Take a Leap:
    Fourteen scientists ponder everything from string theory to true love.


    Space Without Time, Time Without Rest: John Brockman's Question for
    the Republic of Wisdom--It can be more thrilling to start the New Year
    with a good question than with a good intention. That's what John
    Brockman is doing for the eighth time in a row.


    What do you believe to be true, even though you can't prove it? John
    Brockman asked over a hundred scientists and intellectuals...

    Great minds can sometimes guess the truth before they have either
    the evidence or arguments for it (Diderot called it having the "esprit
    de divination"). What do you believe is true even though you cannot
    prove it?
    ______________________________________________________________________

    The 2005 Edge Question has generated many eye-opening responses from a
    "who's who" of third culture scientists and science-minded thinkers.
    The 120 contributions comprise a document of 60,000 words.

    The New York Times ("Science Times") and Frankfurter Allgemeine
    Zeitung ("Feuilliton") have been granted rights to publish excepts in
    their print and online editions simultaneously with Edge publication.
    The editors of "Science Times" and "Feuilliton", respectively, are
    making their own selections. The Italian newspaper, Il Sole 24 Ore
    will follow on Sunday, January 9th.

    This year there's a focus on consciousness, on knowing, on ideas of
    truth and proof. If pushed to generalize, I would say it is a
    commentary on how we are dealing with the idea of certainty.

    We are in the age of "searchculture", in which Google and other search
    engines are leading us into a future rich with an abundance of correct
    answers along with an accompanying naïve sense of certainty. In the
    future, we will be able to answer the question, but will we be bright
    enough to ask it?

    This is an alternative path. It may be that it's okay not to be
    certain, but to have a hunch, and to proceed on that basis. There is
    also evidence here that the scientists are thinking beyond their
    individual fields. Yes, they are engaged in the science of their own
    areas of research, but more importantly they are also thinking deeply
    about creating new understandings about the limits of science, of
    seeing science not just as a question of knowing things, but as a
    means of tuning into the deeper questions of who we are and how we
    know.

    It may sound as if I am referring to a group of intellectuals, and not
    scientists. In fact, I refer to both. In 1991, I suggested the idea of
    a third culture, which "consists of those scientists and other
    thinkers in the empirical world who, through their work and expository
    writing, are taking the place of the traditional intellectual in
    rendering visible the deeper meanings of our lives, redefining who and
    what we are."

    I believe that the scientists of the third culture are the pre-eminent
    intellectuals of our time. But I can't prove it.

    Happy New Year!

    John Brockman
    Publisher & Editor
    ______________________________________________________________________

        This year's Edge Question was suggested by Nicholas Humphrey.
    ______________________________________________________________________

                                 CONTRIBUTORS
    ______________________________________________________________________

    IAN McEWAN
    Novelist; Author, Saturday
    What I believe but cannot prove is that no part of my
    consciousness will survive my death. I exclude the fact that I will
    linger, fadingly, in the thoughts of others, or that aspects of my
    consciousness will survive in writing, or in the positioning of a
    planted tree or a dent in my old car. I suspect that many contributors
    to Edge will take this premise as a given--true but not significant.
    However, it divides the world crucially, and much damage has been done
    to thought as well as to persons, by those who are certain that there
    is a life, a better, more important life, elsewhere. That this span is
    brief, that consciousness is an accidental gift of blind processes,
    makes our existence all the more precious and our responsibilities for
    it all the more profound.

    ______________________________________________________________________

    ROBERT TRIVERS
    Evolutionary biologist, Rutgers University; Author, Natural Selection
    and Social Theory

    Think true, cannot prove.
    I believe that deceit and self deception play a disproportionate role
    in human-generated disasters, including misguided wars, international
    affairs more generally, the collapse of civilizations, and state
    affairs, including disastrous social, political and economic policies
    and miscarriages of justice.
    I believe deceit and self deception play an important role in the
    relative underdevelopment of the social sciences.
    I believe that processes of self deception are important in limiting
    the achievement of individuals.

    ______________________________________________________________________

    IAN WILMUT
    Biologist; Cloning Researcher; Roslin Institute, Edinburgh; Coauthor,
    The Second Creation

    I believe that it is possible to change adult cells
    from one phenotype to another.

    The birth of Dolly provided the insight behind this belief. She was
    the first adult cloned from another adult, of any species. Previously
    biologists had believed that the mechanisms that direct the formation
    of all of the different tissues that make up an adult were so complex
    and so rigidly fixed that they could not be reversed. Her birth
    demonstrated that the mechanisms that were active in the nucleus
    transferred from the mammary epithelial cell could be reversed by
    unknown factors in the recipient unfertilised egg.

    We take for granted the process by which the single cell embryo at
    fertilisation gives rise to all of the many tissues of an adult. As
    almost all of those cells have the same genetic information, the
    changes must be brought about by sequential differences in function of
    the genes. An impression is beginning to emerge of the factors that
    bring about these sequential changes, although much more remains to be
    learned. In particular, very little is known of the hierarchy of
    influence of the several regulatory factors.

    I believe that a greater understanding of these mechanisms will allow
    us to cause cells from one tissue to form another different tissue. We
    have long been accustomed to the idea that cells are influenced by
    their external environment and use specific methods of tissue culture
    to control their function in the laboratory. The new research
    introduces an additional dimension. We will learn how to increase the
    activity of the intracellular factors to achieve our aims. This may be
    by direct introduction of the proteins, use of small molecule drugs to
    modulate expression of regulatory genes or transient expression of
    those key genes. We have much to learn about the optimal approach to
    transdifferentiation. Is it necessary to reverse the process of
    differentiation to an early stage in the same pathway? Or is it
    possible to achieve change directly from one path to another? The
    answer may vary from one tissue to another.

    The medical implications will be profound. Cells of specific tissues
    will be available from patients either for research to understand
    genetic differences or for their therapy. This is not to suggest that
    we cease research on embryo stem cells because knowledge from their
    use will be essential to develop the new approaches that I envisage.
    Conversely, understanding of the mechanisms of reprogramming cells
    will create important new opportunities in the use of embryo stem
    cells. As many options as possible should be available to the
    researcher and clinician.

    It is my belief that, ultimately, this approach to tissue formation
    will be the greatest inheritance of the Dolly experiment. The
    ramifications are far wider than those that involve the production of
    cloned offspring.

    ______________________________________________________________________

    ANTON ZEILINGER
    What I believe but cannot prove is that quantum
    physics teaches us to abandon the distinction between information and
    reality.

    The fundamental reason why I believe in this is that it is impossible
    to make an operational distinction between reality and information. In
    other words, whenever we make any statement about the world, about any
    object, about any feature of any object, we always make statements
    about the information we have. And, whenever we make scientific
    predictions we make statements about information we may attain in
    the future. So one might be tempted to believe that everything is just
    information. The danger there is solipsism and subjectivism. But we
    know, even as we cannot prove it, that there is reality out there. For
    me the strongest argument for a reality independent of us is the
    randomness of the individual quantum event, like the decay of a
    radioactive atom. There is no hidden reason why a given atom decays at
    the very instant it does so.

    So if reality exists and if we will never be able to make an
    operational distinction between reality and information, the
    hypothesis suggests itself that reality and information are the same.
    We need a new concept which encompasses both. In a sense, reality and
    information are the two sides of the same coin.

    I feel that this is the message of the quantum. It is the natural
    extension of the Copenhagen interpretation. Once you adopt the notion
    that reality and information are the same all quantum paradoxes and
    puzzles disappear, like the measurement problem or Schrödinger's cat.
    Yet the price to pay is high. If my hypothesis is true, many questions
    become meaningless. There is no sense then to ask, what is "really"
    going on out there. Schrödinger's cat is neither dead nor alive unless
    we obtain information about her state.

    By the way, I also believe that some day all computers will be quantum
    computers. The reason I believe this is the ongoing miniaturization of
    electronic components. And, certainly, we will learn to overcome
    decoherence. We will learn how to observe quantum phenomena outside
    the shielded environment of laboratories. I hope I will still be alive
    when this happens.

    ______________________________________________________________________

    JARED DIAMOND
    Biologist; Geographer, UCLA; Author, Collapse

    When did humans complete their expansion around the
    world? I'm convinced, but can't yet prove, that humans first reached
    the continents of North America, South America, and Australia only
    very recently, at or near the end of the last Ice Age. Specifically,
    I'm convinced that they reached North America around 14,000 years ago,
    South America around 13,500 years ago, and Australia and New Guinea
    around 46,000 years ago; and that humans were then responsible for
    the extinctions of most of the big animals of those continents within
    a few centuries of those dates; and that scientists will accept this
    conclusion sooner and less reluctantly for Australia and New Guinea
    than for North and South America.
    Background to my conjecture is that there are now hundreds of
    thousands of sites with undisputed evidence of human presence dating
    back to millions of years ago in Africa, Europe, and Asia, but none
    with even disputed evidence of human presence over 100,000 years ago
    in the Americas and Australia. In the Americas, undisputed evidence
    suddenly appears in all the lower 48 U.S. states around 14,000 years
    ago, at numerous South American sites soon thereafter, and at hundreds
    of Australian sites between 46,000 and 14,000 years ago. Evidence of
    most of the former big mammals of those continents--e.g., elephants
    and lions and giant ground sloths in the Americas, giant kangaroos and
    one-ton Komodo dragons in Australia--disappears within a few centuries
    of those dates. The transparent conclusion: people arrived then,
    quickly filled up those continents, and easily killed off their big
    animals that had never seen humans and that let humans walk up to
    them, as Galapagos and Antarctica animals still do today.
    But some Australian archaeologists, and many American archaeologists,
    resist this obvious conclusion, for several reasons. Archaeologists
    try hard to find convincing earlier sites, because it would be a
    dramatic discovery. Every year, discoveries of many purportedly older
    sites are announced, only to be forgotten. As the supporting evidence
    dissolves or remains disputed, we're now in a steady state of new
    claims and vanishing old claims, like a hydra constantly sprouting new
    heads. There are still a few sites known for the Americas with
    evidence of human butchering of the extinct big animals, and none
    known for Australia and New Guinea--but one expects to find very few
    sites anyway, among all the sites of natural deaths for hundreds of
    thousands of years, if the hunting was all finished locally (because
    the prey became extinct) within a few decades. American archaeologists
    are especially persistent in their quest for pre-14,000 sites--perhaps
    because secure dating requires use of multiple dating techniques (not
    just radiocarbon), but American archaeologists distrust alternatives
    to radiocarbon (discovered by U.S. scientists) because the alternative
    dating techniques were discovered by Australian scientists.
    Every year, beginning graduate students in archaeology and
    paleontology, working in Africa or Europe or Asia, go out and discover
    undisputed new sites with ancient human presence. Every year, new such
    discoveries are announced for the other three continents, but none has
    ever met the requirements of evidence accepted for Africa, Europe, or
    Asia. The big animals of the latter three continents survive, because
    they had millions of years to learn fear of human hunters with very
    slowly evolving skills; most big animals of the former three
    continents didn't survive, because they had the misfortune that their
    first encounter with humans was a sudden one, with fully modern
    skilled hunters.
    To me, the case is already proved. How many more decades of
    unconvincing claims will it take to convince the holdouts among my
    colleagues? I don't know. It makes better newspaper headlines to
    report "Wow!! New discovery overturns the established paradigm of
    American archaeology!!" than to report, "Ho hum, yet another
    reportedly paradigm-overturning discovery fails to hold up."

    ______________________________________________________________________

    DANIEL GOLEMAN
    Psychologist; Author, Emotional Intelligence

    I believe, but cannot prove, that today's children
    are unintended victims of economic and technological progress.
    To be sure, greater wealth and advanced technology offer all of us
    better lives in many ways. Yet these unstoppable forces seem to have
    had some disastrous results in how they have been transforming
    childhood. Even as children's IQs are on a steady march upward over
    the last century, the last three decades have seen a major drop in
    children's most basic social and emotional skills--the very abilities
    that would make them effective workers and leaders, parents and
    spouses, and members of the community.

    Of course there are always individual exceptions--children who grow up
    to be outstanding human beings. But the Bell Curve for social and
    emotional abilities seems to be sliding in the wrong direction. The
    most compelling data comes from a random national sample of more than
    3,000 American children ages seven to sixteen--chosen to represent the
    entire nation--rated by their parents and teachers, adults who know
    them well. First done in the early 1970s, and then roughly fifteen
    years later, in the mid-80s, and again in the late 1990s, the results
    showed a startling decline.

    The most precipitous drop occurred between the first and second
    cohorts: American children were more withdrawn, sulky and unhappy,
    anxious and depressed, impulsive and unable to concentrate, delinquent
    and aggressive. Between the early 1970s and the mid-80s, they did more
    poorly on 42 indicators, better on none. In the late 1990s, scores
    crept back up a bit, but were nowhere near as high as they had been on
    the first round, in the early 70s.

    That's the data. What I believe, but can't prove, is that this decline
    is due in large part to economic and technological forces. For one,
    the ratcheting upward of global competition means that over the last
    two decades or so each generation of parents has had to work longer to
    maintain the same standard of living that their own parents
    had--virtually every family has two working parents today, while 50
    years ago the norm was only one. It's not that today's parents love
    their children any less, but that they have less free time to spend
    with them than was true in their parents' day.

    Increasing mobility means that fewer children live in the same
    neighborhood as their extended families--and so no longer have
    surrogate parenting from close relatives. Day care can be excellent,
    particularly for children of privileged families, but too often means
    less well-to-do children get too little caring attention in their day.

    For the middle class, childhood has become overly organized, a tight
    schedule of dance or piano lessons and soccer games, children shuttled
    from one adult-run activity to another. This has eroded the free time
    in which children can play together on their own, in their own way.

    When it comes to learning social and emotional skills, I suspect the
    lessening of open time with family, relatives and other children
    translates into a loss of the very activities that have traditionally
    allowed the natural transmission of these skills.

    Then there's the technological factor. Today's children spend more
    time than ever in human history alone, staring at a video monitor.
    That amounts to a natural experiment in childrearing on an
    unprecedented scale. While this may mean children as adults will be
    more at ease with their computers, I doubt it does anything but
    de-skill them when it comes to relating to each other
    person-to-person.

    We know that the prefrontal-limbic neural circuitry crucial for social
    and emotional abilities is the last part of the human brain to become
    anatomically mature, not finishing this developmental task until the
    mid-20s. During that window, children's life abilities become set as
    neurons come online and are interconnected for better or for worse. A
    child's experiences dictate how those connections are made.
    A smart strategy for helping every child get the right social and
    emotional skill-building would be to bring such lessons into the
    classroom rather than leaving it to chance. My hunch, which I can't
    prove, is that this offers the best way to keep children from paying
    the price of modern life for us all.

    ______________________________________________________________________

    MARTI HEARST
    Computer Scientist, UC Berkeley, School of Information Management &
    Systems
    The Search Problem is solvable.
    Advances in computational linguistics and user interface design will
    eventually enable people to find answers to any question they have, so
    long as the answer is encoded in textual form and stored in a publicly
    accessible location. Advances in reasoning systems will to a limited
    degree be able to draw inferences in order to find answers that are
    not explicitly present in the existing documents.
    There have been several recent developments that prompt me to make
    this claim. First, computational linguistics (also known as natural
    language processing or language engineering) has made great leaps
    forward in the last decade, due primarily to advances stemming from
    the availability of huge text collections, from which statistics can
    be derived. Today's automatic language translation systems, for
    example, are now derived almost entirely from statistical patterns
    extracted from text collections. They now work as well as
    hand-engineered systems, and promise to continue to improve. As
    another example, recent government-sponsored research in the area of
    (simple) question answering has produced a radical leap forward in the
    quality of results in this arena.
    Of course, another important development is the rise of the Web and
    its most voracious consumer, the internet search engine. It is common
    knowledge that search engines make use of information associated with
    link structure to improve results rankings. But search engine
    companies also have enormous, albeit somewhat impoverished,
    repositories of information about how people ask for information. This
    behavioral information can be used to build better search tools. For
    example, some spelling correction algorithms make use of how people
    have corrected erroneous spellings in the past, by observing pairs of
    queries that occur one after the next. The second query is assumed to
    be the correction, if it is sufficiently similar to the first.
    Patterns are then derived that convert from different types of
    misspellings to their corrections.
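    A minimal sketch of the query-log heuristic just described
    (illustrative only: the function names, the toy log, and the
    edit-distance threshold are my assumptions, not a description of any
    real search engine): treat consecutive queries as a candidate
    (misspelling, correction) pair when they are nearly identical, then
    count how often each rewrite recurs.

        from collections import Counter

        def edit_distance(a, b):
            """Levenshtein distance computed with a single-row dynamic program."""
            dp = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                prev, dp[0] = dp[0], i
                for j, cb in enumerate(b, 1):
                    prev, dp[j] = dp[j], min(dp[j] + 1,          # delete from a
                                             dp[j - 1] + 1,      # insert into a
                                             prev + (ca != cb))  # substitute/match
            return dp[-1]

        def correction_pairs(query_log, max_dist=2):
            """Keep consecutive query pairs that look like a user fixing a typo."""
            pairs = Counter()
            for first, second in zip(query_log, query_log[1:]):
                if first != second and edit_distance(first, second) <= max_dist:
                    pairs[(first, second)] += 1
            return pairs

        # Toy session log (made-up queries).
        log = ["recieve", "receive", "britny spears", "britney spears", "weather"]
        for (wrong, right), n in correction_pairs(log).items():
            print(f"{wrong!r} -> {right!r} (seen {n}x)")

    A production system would aggregate such pairs over millions of
    sessions and generalize them into rewrite patterns; the sketch shows
    only the pairing step.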
    Another development in the field of computational linguistics is the
    manual creation of enormous lexical ontologies, which are then used to
    build axioms and rules about language use. These modern ontologies,
    unlike their predecessors, are of a large enough scale and simple
    enough design to be useful, although this work is in the early stages.
    There are also many attempts to build such ontologies automatically
    from large text collections; the most promising approach seems to be
    to combine the automated and the manual approaches.
    As a side note, I am skeptical about the hype surrounding the Semantic
    Web--it is very difficult to characterize concepts in a systematic
    way, and even more so to force all the world's creators of information
    to conform to one schema. Automated analysis tools adapt to what
    people really do, rather than try to force people's expressions of
    information to conform to a standard.
    Finally, advances in user interface design are key to producing better
    search results. The search field has learned an enormous amount in the
    ten years since the Web became a major presence in society, but as is
    often noted in the field, the interface itself hasn't changed much:
    after all this time, we still type words into a blank box and then
    select from a list of results. Experience shows that a search
    interface has to be a qualitative leap better than the standard in
    order to entice people to switch. I believe headway will be made in
    this area, most likely occurring in tandem with advances in natural
    language analysis.
    It may well be the case that advances in audio, image, and video
    processing will keep pace with those of language analysis, thus making
    possible the answering of questions that can be answered by
    information stored in graphical and audio form. However, my expertise
    does not extend to these fields, so I will not make a claim about
    this.

    ______________________________________________________________________

    TIMOTHY TAYLOR
    Archaeologist, University of Bradford; Author, The Buried Soul

    "All your life you live so close to the truth, it
    becomes a permanent blur in the corner of your eye, and when something
    nudges it into outline it is like being ambushed by a grotesque" wrote
    Tom Stoppard in Rosencrantz and Guildenstern are Dead. Something I
    believe is true even though I cannot prove it, is that both
    cannibalism and slavery were prevalent in human prehistory. Neither
    belief commands specialist academic consensus and each phenomenon
    remains highly controversial, their empirical "signatures" in the
    archaeological record being ambiguous and fugitive.

    Truth and belief are uncomfortable words in scholarship. It is
    possible to define as true only those things that can be proved by
    certain agreed criteria. In general, science does not believe in truth
    or, more precisely, science does not believe in belief. Understanding
    is understood as the best fit to the data under the current limits
    (both instrumental and philosophical) of observation. If science
    fetishized truth, it would be religion, which it is not. However, it
    is clear that under the conditions that Thomas Kuhn designated as
    "normal science" (as opposed to the intellectual ferment of paradigm
    shifts) most scholars are involved in supporting what is, in effect, a
    religion. Their best guesses become fossilized as a status quo, and
    the status quo becomes an item of faith. So when a scientist tells you
    that "the truth is . . .", it is time to walk away. Better to find a
    priest.

    Until recently, most archaeologists would be inclined to say that the
    truths about cannibalism and about slavery are that each has been
    sharply historically limited and that each is a more or less aberrant
    cultural phenomenon. The reason for such a belief is that it is only
    in a small number of cases that either thing can be proved beyond
    reasonable doubt. But I see the problem in the starting point.

    If we shift our background expectations and say that coercing a living
    person to do one's bidding is perhaps the very first form of property
    ownership ("the slavery latent in the family" to use Marx and Engels'
    telling phrase), and that eating the dead (as very many wild
    vertebrates do) makes sense in nutritional and competitive terms, then
    the archaeologist's duty is to empirically establish those times and
    places where slavery and cannibalism had ceased to exist. The only
    reason we have hitherto insisted on proof-positive rather than
    proof-negative in relation to these phenomena is that both seem
    grotesque to us now, and we have rather a high opinion of our natural
    civility. This is the most interesting point, and the focus of my
    attention is how culturally-elaborated mechanisms of restraint and
    inter-personal respect emerged and allowed such refined scruples.

    ______________________________________________________________________

    RANDOLPH NESSE, M.D.
    Psychiatrist, University of Michigan; Coauthor, Why We Get Sick

    I can't prove it, but I am pretty sure that people gain
    a selective advantage from believing in things they can't prove. I am
    dead serious about this. People who are sometimes consumed by false
    beliefs do better than those who insist on evidence before they
    believe and act. People who are sometimes swept away by emotions do
    better in life than those who calculate every move. These advantages
    have, I believe, shaped mental capacities for intense emotion and
    passionate beliefs because they give a selective advantage in certain
    situations.

    I am not advocating for irrationality or extreme emotionality. Many,
    perhaps even most problems of individuals and groups arise from
    actions based on passion. The Greek initiators and Enlightenment
    implementers recognized correctly that the world would be better off
    if reason displaced superstition and crude emotion. I have no interest
    in going back on that road and fundamentalism remains a severe threat
    to enlightened civilization. I am arguing, however, that if we want to
    understand these tendencies we need to quit dismissing them as defects
    and start considering how they came to exist.
    I came to this belief from seeing psychiatric patients while studying
    game theory and evolutionary biology. Many patients are consumed by
    fears, sadness, and other emotions they find painful and senseless.
    Others are crippled by grandiose fantasies or bizarre beliefs. On the
    other side are those with obsessive compulsive personality. They do
    not have obsessive compulsive disorder; they do not wash and count all
    day. They have obsessive compulsive personality characterized by
    hyper-rationality. They are mystified by other people's emotional
    outbursts. They do their duty and expect others will too. They are
    often disappointed in this, giving rise to frequent resentment if not
    anger. They trade favors according to the rules, and they can't fathom
    genuine generosity or spiteful hatred.

    People who lack passions suffer several disadvantages. When social
    life results in situations that can be mapped onto game theory,
    regular predictable behavior is a strategy inferior to allocating
    actions randomly among the options. The angry person who might seek
    spiteful revenge is a force to be reckoned with, while a sensible
    opponent can be easily dealt with. The passionate lover sweeps away a
    superior but all too practical offer of marriage.

    It is harder to explain the disadvantages suffered by people who lack
    a capacity for faith, but consider the outcomes for those who wait for
    proof before acting, compared to those who act on confident
    conviction. The great things in life are done by people who go ahead
    when it seems senseless to others. Usually they fail, but sometimes
    they succeed.

    Like nearly every other trait, tendencies for passionate emotions and
    irrational convictions are most advantageous in some middle range. The
    optimum for modern life seems to me to be quite a ways towards the
    rational side of the median, but there are advantages and
    disadvantages at every point along the spectrum. Making human life
    better requires that we understand these capacities, and to do that we
    must seek their origins and functions. I cannot prove this is true,
    but I believe it is. This belief spurs my search for evidence which
    will either strengthen my conviction or, if I can discipline my mind
    sufficiently, convince me that it is false.

    ______________________________________________________________________

    STEPHEN H. SCHNEIDER
    Biologist; Climatologist, Stanford University; Author, Laboratory
    Earth

    I believe that global warming is both a real
    phenomenon and at least partially a result of human activities such as
    dumping greenhouse gases in the atmosphere. In fact I can "prove
    it"--or can I?--that is the real question.

    What is "proof"? In the strict old fashioned frequentist statistical
    belief system data is direct observations of the hypothesized
    phenomena--temperature increases in my case--and when you get enough
    of it to produce frequency distributions you can assign objective
    probabilities to cause and effect hypotheses. But what if the events
    cannot be precisely measured, or worse, apply to future events like
    the warming of the late 21st century? Then a frequentist
    interpretation of "proof" is impossible in principle before the fact,
    and we instead become subjectivists--Bayesian updaters as some
    statisticians like to refer to it. In this case we use frequency data
    and all other data relevant to components of our analysis to form a
    "prior"--a belief about likelihood of an event or process. Then as we
    learn more we update our belief--an "a posteriori probability" as the
    Bayesians call it--or simply a revised prior.
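    A toy numerical illustration of the updating procedure just described
    (a minimal sketch: the starting prior and the likelihood ratios are
    invented for illustration and are not Schneider's numbers): begin with
    a prior probability for a hypothesis, then fold in each new line of
    evidence with Bayes' rule in odds form.

        def bayes_update(prior, likelihood_ratio):
            """Posterior P(H|E) from prior P(H) and the ratio P(E|H) / P(E|not H)."""
            prior_odds = prior / (1.0 - prior)
            posterior_odds = prior_odds * likelihood_ratio
            return posterior_odds / (1.0 + posterior_odds)

        p = 0.50                      # hypothetical starting prior
        for lr in [3.0, 2.0, 4.0]:    # hypothetical, independent lines of evidence
            p = bayes_update(p, lr)
            print(f"revised prior: {p:.2f}")   # prints 0.75, 0.86, 0.96

    Each printed value is the "revised prior" of the paragraph above,
    against which the next piece of evidence is then weighed.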
    It is my strong belief that there is an overwhelming amount of
    evidence to form a subjective prior with high confidence that the
    earth's surface has warmed over the past century about 0.7 deg C or so
    and that at least half of the more recent warming is traceable to
    human pressures. Is this "proof" of anthropogenic (i.e., we did it)
    warming? Not in the strict sense of a criminal trial with "beyond a
    reasonable doubt" criterion--say a 99% objective probability. But in
    the sense of a civil proceeding, where "preponderance of evidence" is
    the standard and a likelihood much greater than 50% is adequate to
    have a case, then global warming is indeed already "proved". So as a
    frequentist I concede I believe it is real without full "proof", but
    as a subjectivist, my reading of the many lines of evidence puts
    global warming well over the minimum thresholds of belief to assert it
    is already "proved".

    ______________________________________________________________________

    BRIAN GOODWIN
    Biologist, Schumacher College, Devon, UK; Author, How The Leopard
    Changed Its Spots

    Nature Is Culture.

    I believe that nature and culture can now be understood as one unified
    process, not two distinct domains separated by some property of humans
    such as written or spoken language, consciousness, or ethics. Although
    there is no proof of this, and no consensus in the scientific
    community or in the humanities, the revelations of the past few years
    provide a foundation for both empirical and conceptual work that I
    believe will lead to a coherent, unified perspective on the process in
    which we and nature are engaged. This is not a take-over of the
    humanities by science, but a genuine fusion of the two based on clear
    articulations of basic concepts such as meaning and wholeness in
    natural and cultural processes, with implications for scientific
    studies, their applications in technology and their expression in the
    arts.

    For me this vision has arisen primarily through developments in
    biology, which occupies the middle ground between culture and the
    physical world. The key conceptual changes have arisen from complexity
    theory through detailed studies of the networks of interactions
    between components within organisms, and between them in ecosystems.
    When the genome projects made it clear that we are unable to make
    sense of the information in DNA, attention necessarily shifted to
    understanding how organisms use this in making themselves with forms
    that allow them to survive and reproduce in particular habitats. The
    focus shifted from the hereditary material to its organised context,
    the living cell, so that organisms as agencies with a distinctive kind
    of organisation returned to the biological foreground.

    Examination of the self-referential networks that regulate gene
    activities in organisms, that carry out the diverse functions and
    constructions within cells through protein-protein interactions (the
    proteome), and the sequences of metabolic transformations that make up
    the metabolome, has revealed that they all have distinctive
    properties of self-similar, fractal structure governed by power-law
    relationships. These properties are similar to the structure of
    languages, which are also self-referential networks described by
    power-laws, as discovered years ago by G.K. Zipf. A conclusion is that
    organisms use proto-languages to make sense of both their inherited
    history (written in DNA and its molecular modifications) and their
    external contexts (the environment) in the process of making
    themselves as functional agencies. Organisms thus become participants
    in cultures with histories that have meaning, expressed in the forms
    (morphologies and behaviours) distinctive to their species. This is of
    course embodied or tacit meaning, which cognitive scientists now
    recognise as primary in human culture also.
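
    As a small, concrete illustration of the Zipf-style power law invoked
    above (my sketch, not Goodwin's analysis; the inline sample text is a
    stand-in, and a sizeable real corpus is needed before the pattern
    shows clearly): rank the words of a text by frequency, and frequency
    falls off roughly as a power of the rank, so rank times frequency
    stays roughly constant.

        import re
        from collections import Counter

        def zipf_table(text, top=10):
            """Rank words by frequency; under Zipf's law, frequency ~ 1 / rank."""
            words = re.findall(r"[a-z']+", text.lower())
            ranked = Counter(words).most_common(top)
            return [(rank, word, n, rank * n)
                    for rank, (word, n) in enumerate(ranked, start=1)]

        # Stand-in text; substitute any long document to see the law emerge.
        sample = "the quick brown fox jumps over the lazy dog and the fox naps " * 40
        for rank, word, n, product in zipf_table(sample):
            print(f"{rank:>3}  {word:<10} {n:>5}  rank*freq = {product}")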

    Understanding species as cultures that have experienced 3.7 billion
    years of adaptive evolution on earth makes it clear that they are
    repositories of meaningful knowledge and experience about effective
    living that we urgently need to learn about in human culture. Here is
    a source of deep wisdom about living in participation with others that
    is energy and resource efficient, that recycles everything, produces
    forms that are simultaneously functional and beautiful, and is
    continuously innovative and creative. We can now proceed with a
    holistic science that is unified with the arts and humanities and has
    at its foundation the principles that arise from a naturalistic ethic
    based on an extended science that includes qualities as well as
    quantities within the domain of knowledge.

    There is plenty of work to do in articulating this unified
    perspective, from detailed empirical studies of the ways in which
    organisms achieve their states of coherence and adaptability to the
    application of these principles in the organic design of all human
    artefacts, from energy-generating devices and communication systems to
    cars and factories. The goal is to make human culture as integrated
    with natural process as the rest of the living realm so that we
    enhance the quality of the planet instead of degrading it. This will
    require a rethinking of evolution in terms of the intrinsic agency
    with meaning that is embodied in the life cycles of different species,
    understood as natural cultures.

    Integrating biology and culture with physical principles will be
    something of a challenge, but there are already many indications of
    how this can be achieved, without losing the thread of language and
    meaning that runs through living nature. The emphasis on wholeness
    that lies at the heart of quantum mechanics and its extensions in
    quantum gravity, together with the subtle order revealed as quantum
    coherence, is already stimulating a rethinking of the nature of
    wholeness, coherence and robust adaptability in organisms as well as
    quality of life in cultures. Furthermore, the self-similar, fractal
    patterns that arise in physical systems during phase transitions, when
    new order is coming into being, have the same characteristics as the
    patterns observed in organismic and cultural networks involved in
    generating order and meaning. The unified vision of a creative and
    meaningful cosmic process seems to be on the agenda as a replacement
    for the meaningless mechanical cosmos that has dominated Western
    scientific thought and cultural life for a few hundred years.

    ______________________________________________________________________

    TERRENCE SEJNOWSKI
    Computational Neuroscientist, Howard Hughes Medical Institute;
    Coauthor, The Computational Brain

    How do we remember the past? There are many answers
    to this question, depending on whether you are an historian, artist or
    scientist. As a scientist I have wanted to know where in the brain
    memories are stored and how they are stored--the genetic and neural
    mechanisms. Although neuroscientists have made tremendous progress in
    uncovering neural mechanisms for learning, I believe, but cannot
    prove, that we are all looking in the wrong place for long-term
    memory.

    I have been puzzled by my ability to remember my childhood, despite
    the fact that most of the molecules in my body today are not the same
    ones I had as a child--in particular, the molecules that make up my
    brain are constantly turning over, being replaced with newly minted
    molecules. Perhaps memories only seem to be stable. Rehearsal
    strengthens memories, and can even alter them. However, I have
    detailed memories of specific places where I lived 50 years ago that I
    doubt I ever rehearsed but can be easily verified, so the stability of
    long-term memories is a real problem.

    Textbooks in neuroscience, including one that I coauthored, say that
    memories are stored at synapses between neurons in the brain, of which
    there are many. In neural network models of memory, information can be
    stored by selectively altering the strengths of the synapses, and
    "spike-time dependent plasticity" at synapses in the cerebral cortex
    has been found with these properties. This is a hot area of research,
    but all we need to know here is that patterns of neural activity can
    indeed modify a lot of molecular machinery inside a neuron.

    If memories are stored as changes to molecules inside cells, which are
    constantly being replaced, how can a memory remain stable over 50
    years? My hunch is that everyone is looking in the wrong place: that
    the substrate of really old memories is located not inside cells, but
    outside cells, in the extracellular space. The space between cells is
    not empty, but filled with a matrix of tough material that is
    difficult to dissolve and turns over very slowly if at all. The
    extracellular matrix connects cells and maintains the shape of the
    cell mass. This is why scars on your body haven't changed much after
    decades of sloughing off skin cells.

    My intuition is based on a set of classic experiments on the
    neuromuscular junction between a motor neuron and a muscle cell, a
    giant synapse that activates the muscle. The specialized extracellular
    matrix at the neuromuscular junction, called the basal lamina,
    consists of proteoglycans, glycoproteins, including collagen, and
    adhesion molecules such as laminin and fibronectin. If the nerve that
    activates a muscle is crushed, the nerve fiber grows back to the
    junction and forms a specialized nerve terminal ending. This occurs
    even if the muscle cell is also killed. The memory of the contact is
    preserved by the basal lamina at the junction. Similar material exists
    at synapses in the brain, which could permanently maintain overall
    connectivity despite the coming and going of molecules inside neurons.

    How could we prove that the extracellular matrix really is responsible
    for long-term memories? One way to disprove it would be to disrupt the
    extracellular matrix and see if the memories remain. This can be done
    with enzymes or by knocking out one or more key molecules with
    techniques from molecular genetics. If I am right, then all of your
    memories--what makes you a unique individual--are contained in the
    endoskeleton that connects cells to each other. The intracellular
    machinery holds memories temporarily and decides what to permanently
    store in the matrix, perhaps while you are sleeping. It might be
    possible someday to stain this memory endoskeleton and see what
    memories look like.

    ______________________________________________________________________

    ALEXANDER VILENKIN
    Physicist; Institute of Cosmology, Tufts University

    There are good reasons to believe that the universe
    is infinite.
    If so, it contains an infinite number of regions of the same size as
    our observable region (which is 80 billion light years across). It
    follows from quantum mechanics that the number of distinct histories
    that could occur in any of these finite regions in a finite time
    (since the big bang) is finite. By history I mean not just the history
    of the civilization, but everything that happens, down to the atomic
    level. The number of possible histories is fantastically large (it has
    been estimated as 10 to the power 10^150), but the important point is
    that it is finite.
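
    One way to make the counting step explicit (my formalization, not
    Vilenkin's wording): with finitely many possible histories and
    infinitely many regions, the pigeonhole principle alone forces some
    history to recur infinitely often; the stronger claim in the next
    paragraph additionally assumes that each possible history has a fixed
    nonzero probability in every region and that the regions are
    effectively independent.

        % Sketch in LaTeX notation, under the assumptions stated above.
        N_{\mathrm{hist}} \lesssim 10^{10^{150}} < \infty , \quad
        N_{\mathrm{regions}} = \infty
        \;\Longrightarrow\; \text{some history recurs in infinitely many regions;}
        \\[4pt]
        p_{\mathrm{hist}} > 0 \text{ per region, regions independent}
        \;\Longrightarrow\; P(\text{each such history occurs infinitely often}) = 1 .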

    Thus, we have an infinite number of regions like ours and only a
    finite number of histories that can play out in them. It follows that
    every possible history will occur in an infinite number of regions. In
    particular, there should be an infinite number of regions with
    histories identical to ours. So, if you are not satisfied with the
    result of the presidential elections, don't despair: your candidate has
    won on an infinite number of earths.

    This picture of the universe robs our civilization of any claim for
    uniqueness: countless identical civilizations are scattered in the
    infinite expanse of the cosmos. I find this rather depressing, but it
    is probably true.

    Another thing that I believe to be true, but cannot prove, is that our
    part of the universe will eventually stop expanding and will
    recollapse to a big crunch. But this will happen no sooner than 20
    billion years from now, and probably much later.

    ______________________________________________________________________

    OLIVER MORTON
    Writer; Contributing Editor, Wired, Newsweek International; Author,
    Mapping Mars
    I've always found belief a bit difficult; people tend
    to assume that I have rather strong beliefs, but I don't experience
    them in that way. As far as knowledge goes I'm a consumer, and
    sometimes a distributor, not a producer; most of what I believe to be
    true lies far beyond my capacity for proof, and I try to moderate the
    timbre of my belief accordingly. I know that almost all my beliefs are
    based on faith in people, and processes, and institutions, and their
    various capacities for correcting themselves when in error.

    I think the same is true for most of us; those who can prove their
    beliefs in their field of expertise are still reliant on faith in
    others when it comes to other fields. To acknowledge this at all times
    is not possible--it would make every utterance tentative, encrust
    every concept with ceteris paribus clauses. But when faced with a
    question like this, the role of our faith in people and in social
    institutions has to be acknowledged. And it does no harm to
    acknowledge it now and then even when not faced with such a question,
    in order to reinforce the need to keep people, institutions and the
    processes of knowledge production held in helpful scrutiny.
    Which I suppose means that, for me, the real question is what do I
    believe that I don't think anyone can prove. In answer I'd put forward
    the belief that there is a future much better, in terms of reduced
    human suffering and increased human potential, than the present, and
    that one part of what makes it better is a greater, subtler knowledge
    of the world at large.
    If I can't prove this, why do I believe it? Because it's better than
    believing the alternative. Because it provides a context for social
    and political action that would otherwise be futile; in this, it is an
    exhortatory belief. It is also, in part, a self-serving one, in that
    it suggests that by trying to clarify and disseminate knowledge (a
    description that makes me sound like the chef at a soup kitchen) I'm
    doing something that helps the better future, if only a bit.
    Besides the question of why, though, there's the question of how. And
    there the answer is "with difficulty". It is not an easy thing for me
    to make myself believe. But it is what I want to believe, and on my
    best days I do.

    ______________________________________________________________________

    PAUL STEINHARDT
    Albert Einstein Professor of Physics, Princeton University.
    I believe that our universe is not accidental, but
    I cannot prove it.
    Historically, most physicists have shared this point-of-view. For
    centuries, most of us have believed that the universe is governed by a
    simple set of physical laws that are the same everywhere and that
    these laws derive from a simple unified theory.
    However, in the last few years, an increasing number of my most
    respected colleagues have become enamored with the anthropic
    principle--the idea that there is an enormous multiplicity of
    universes with widely different physical properties and the properties
    of our particular observable universe arise from pure accident. The
    only special feature of our universe is that its properties are
    compatible with the evolution of intelligent life. The change in
    attitude is motivated, in part, by the failure to date to find a
    unified theory that predicts our universe as the unique possibility.
    According to some recent calculations, the current best hope for a
    unified theory--superstring theory--allows an exponentially large
    number of different universes, most of which look nothing like our
    own. String theorists have turned to the anthropic principle for
    salvation.
    Frankly, I view this as an act of desperation. I don't have much
    patience for the anthropic principle. I think the concept is, at
    heart, non-scientific. A proper scientific theory is based on testable
    assumptions and is judged by its predictive power. The anthropic
    principle makes an enormous number of assumptions--regarding the
    existence of multiple universes, a random creation process,
    probability distributions that determine the likelihood of different
    features, etc.--none of which are testable because they entail
    hypothetical regions of spacetime that are forever beyond the reach of
    observation. As for predictions, there are very few, if any. In the
    case of string theory, the principle is invoked only to explain known
    observations, not to predict new ones. (In other versions of the
    anthropic principle where predictions are made, the predictions have
    proven to be wrong. Some physicists cite the recent evidence for a
    cosmological constant as having been anticipated by anthropic argument;
    however, the observed value does not agree with the anthropically
    predicted value.)
    I find the desperation especially unwarranted since I see no evidence
    that our universe arose by a random process. Quite the contrary,
    recent observations and experiments suggest that our universe is
    extremely simple. The distribution of matter and energy is remarkably
    uniform. The hierarchy of complex structures ranging from galaxy
    clusters to subnuclear particles can all be described in terms of a
    few dozen elementary constituents and less than a handful of forces,
    all related by simple symmetries. A simple universe demands a simple
    explanation. Why do we need to postulate an infinite number of
    universes with all sorts of different properties just to explain our
    one?
    Of course, my colleagues and I are anxious for further reductionism.
    But I view the current failure of string theory to find a unique
    universe simply as a sign that our understanding of string theory is
    still immature (or perhaps that string theory is wrong). Decades from
    now, I hope that physicists will be pursuing once again their dreams
    of a truly scientific "final theory" and will look back at the current
    anthropic craze as millennial madness.

    ______________________________________________________________________

    ELLEN WINNER
    Psychologist, Boston College; Author, Gifted Children
    Sometimes our folk theories are correct: Parents do
    shape their children.
    According to our folk theories of child development, parents are a
    major and inescapable influence on their children. Most people believe
    that how parents treat their children, as well as the values parents
    impart, leaves a strong and indelible imprint. Yet some psychologists
    have countered this view and have pointed to the finding that on paper
    and pencil personality tests, parents and children (especially parents
    and their adopted children) are often not mirrors of one another.
    Psychologists have not yet proven to skeptics that parents have a
    strong influence on their children, but I am convinced that we will be
    able to demonstrate this.

    To begin with, producing children whose personality mirrors one's own
    is hardly the only way for parents to influence their children. We
    should not expect children to mirror their parents' personalities
    since they may often develop personalities in reaction to their
    parents. If you react against something, that something is having an
    influence on you. A depressed mother may engender a solicitous child.
    An impulsive parent may engender a careful child intent on not
    repeating the parent's errors.

    Another problem with only using personality tests to examine parental
    influence is that these tests ignore political, social, and moral
    values and aesthetic tastes. I believe that children end up with much
    of their parents' values and tastes. We know that one of the best
    predictors of how people vote is how their parents vote. Parental
    values such as generosity, ambition, materialism, anti-materialism,
    etc. have powerful effects on children. True, children may react
    against their parents' values. Materialistic parents have bred hippie
    children. But how many of these children eventually shed their hippie
    clothing and go to Wall Street? All too many.

    If parents had no influence on their children, what is it that keeps
    psychoanalysts in business? Some children hate their parents. Some
    feel rage at their parents. Some feel their parents make them feel
    guilty. Some feel damaged by their parents. Some feel they are
    carrying on their parents' traditions. Some feel they owe their
    character strength to their parents. I fervently doubt that these
    feelings are merely epiphenomenal.

    Judith Rich Harris, in The Nurture Assumption, took the position that
    parents have essentially no influence on their children besides
    passing on their genes and choosing their children's peer group. Steve
    Pinker said that the publication of this book was a landmark event in
    the history of psychology. I disagree with Harris' extreme claims and
    Pinker's endorsement.

    To demonstrate parents' effects on their children, we will need better
    measures than quantitative short answer paper and pencil personality
    tests, and we will need to recognize that parents may influence their
    children to become like them or to become unlike them. One way to
    start is to develop a set of predictions about how parents shape their
    children (either to become like or unlike them), interview people
    about how they believe they have been shaped by their parents, and
    look for whether the patterns found fit the predictions. A stronger
    way is to look at adult adopted children, after the tumultuous
    adolescent years, and look at the extent to which these children
    either share their adoptive parents' values or have reacted against
    those values. Either way (sharing or reacting against), there is a
    powerful parental influence. The way to disprove my claim would be to
    show no systematic positive or negative relationships between parents
    and adoptive children. The belief that parents shape their children is
    part of our folk theory. Sometimes our folk theories are correct.

                                                   |[310]back to contents|
    ______________________________________________________________________

    [311]BENOIT MANDELBROT
    Mathematician, Yale University; Author, The Fractal Geometry of Nature

    [mandelbrot100.jpg] Wandering through the frontiers of the sciences
    and the arts, I have always trusted the eye while leaving aside the
    issues that elude it. It can mislead--of course--therefore I check
    endlessly and never rush to print.
    Meanwhile, for over fifty years, I have watched as some disciplines
    exhaust the "top down" problems they know how to tackle. So they
    wander around seeking totally new patterns in a dark and deep mess,
    where an unlit lamp is of little help.

    But the eye can continually be trained, and long ago I vowed to
    follow it, and therefore to work "from the bottom up." Like the Antaeus of
    Greek myth, I gather strength and persist by often touching the earth.

    A few of the truths the eye told me have been disproven. Let it be.
    Others have been confirmed by enormous and fruitful effort, and then
    blossomed, one being the four thirds conjecture in Brownian motion.
    Many others remain, one being the MLC conjecture about the Mandelbrot
    set, in which I believe for no other reason than trust in the eye.

                                                   |[312]back to contents|
    ______________________________________________________________________

    [313]STANISLAS DEHAENE
    Cognitive Neuropsychology Researcher, Institut National de la Santé,
    Paris; Author, The Number Sense
    [dehane100.jpg] I believe (but cannot prove) that we vastly
    underestimate the differences that set the human brain apart from the
    brains of other primates.

    Certainly, no one can deny that there are important similarities in
    the overall layout of the human brain and, say, the macaque monkey
    brain. Our primary sensory and motor cortices are organized in similar
    ways. Even in higher brain areas, homologies can be found. In the
    parietal lobe, using brain-imaging methods, my lab has observed
    plausible human counterparts to several areas of the macaque brain,
    involved in eye movement, hand gestures, and even number processing.

    Yet I fear that those early successes in drawing human-monkey
    homologies tend to mask other massive differences. If we compare the
    primary visual areas of macaques and humans, there is already a
    two-fold difference in surface area, but in parietal and frontal
    areas, a twenty- to fifty-fold increase is found. Even such a massive
    distortion may not suffice to "align" the macaque and human brain.
    Many of us suspect that, in regions such as the prefrontal and
    inferior parietal cortices, the changes are so dramatic that they may
    amount to the addition of new brain areas.

    At a more microscopic level, it is already known that there is a new
    type of neuron which is found in the anterior cingulate region of
    humans and great apes, but not in other primates. These "spindle
    cells" send connections throughout the cortex, and thus contribute to
    a massive increase in long-distance connectivity in the human brain.
    Indeed, the change in relative white matter volume is perhaps what is
    most dramatic about the human brain.

    I believe that these surface and connectivity changes, although they
    are in many cases quantitative, have brought about a qualitative
    revolution in brain function:

      Breaking the brain's modularity.

    Jean-Pierre Changeux and I have proposed that the increased
    connectivity of the human brain gives access to a new mode of brain
    function, characterized by a very flexible communication between
    distant brain areas. We may possess roughly the same list of
    specialized cerebral processors as our primate ancestors. However, I
    speculate that what might be unique about the human brain is its
    capacity to access the information inside each processor, and make it
    available to almost any other processor through long-distance
    connections. I believe that we humans have a much more developed
    conscious workspace--a set of brain areas that can fluidly exchange
    signals, thus allowing us to internally manipulate information and to
    perform new mental syntheses. Using the workspace's long-distance
    connections, we can mobilize, in a top-down manner, essentially any
    brain area and bring it into consciousness.

      Spontaneous activity and the autonomy of consciousness.

    Once the internal connectivity of a system exceeds a threshold, it
    begins to be dominated by self-sustained, reverberating states of
    activity. I believe that the human workspace system has passed this
    threshold, and has gained a considerable autonomy relative to the
    outside world. The human brain is much less at the mercy of signals
    from the outside world. Its activity never ceases to reverberate from
    area to area, thus generating a highly structured spontaneous flow of
    thoughts that we project on the outside world.

    Of course, spontaneous brain activity is present in all species, but
    if I am correct we will discover that it is both more evident and more
    structured in the human brain, at least in higher cortical areas where
    "workspace" neurons with long-distance axons are denser. Furthermore,
    if human brain activity can be detached from outside stimulation, we
    will need to find new paradigms to study it, because bombarding the
    human brain with stimuli, as we do in most brain-imaging experiments,
    will not suffice. There is already some evidence for this statement:
    by directly comparing fMRI activations evoked by the same visual
    stimuli in humans and macaques, Guy Orban and his colleagues in Leuven
    have found that prefrontal cortex activity is five times larger in
    macaques than in humans. In their own words, "there may be more
    volitional control over visual processing in humans than in monkeys".

      The profound influence of culture on the human brain.

    The human species is also unique in its ability to expand its
    functionality by inventing new cultural tools. Writing, arithmetic,
    science, are all very recent inventions--our brains did not have time
    to evolve for them, but I speculate that they were made possible
    because we can mobilize our old areas in novel ways. When we learn to
    read, we "recycle" a specific region of our visual system, which has
    become known as the "visual word form area", for the purpose of
    recognizing strings of letters and connecting them to language areas.
    When we learn Arabic numerals, likewise, we build a circuit to quickly
    convert those shapes into quantities, a fast connection from bilateral
    visual areas to the parietal quantity area. Even an invention as
    elementary as finger counting dramatically changes our cognitive
    abilities: Amazonian peoples who have not invented counting are unable
    to make exact calculations as simple as 6 - 2.

    Crucially, this "cultural recycling" implies that whenever we look at
    a human brain, the functional architecture that we see results from a
    complex mixture of biological and cultural constraints. Education is
    likely to greatly increase the gap between the human brain and that of
    our primate cousins. Virtually all human brain imaging experiments
    today are performed on highly literate volunteers--and therefore,
    presumably, highly transformed brains. To better understand the
    differences between the human brain and the monkey brain, we will need
    to invent new methods, both to decipher the organization of the baby
    brain prior to education, and to study how it changes with education.

                                                   |[314]back to contents|
    ______________________________________________________________________

    [315]TOR NØRRETRANDERS
    Science Writer; Consultant; Lecturer, Copenhagen; Author, The User
    Illusion
    [norretranders100.jpg] I believe in belief--or rather: I have faith in
    having faith. Yet, I am an atheist (or a "bright" as some would have
    it). How can that be?

    It is important to have faith, but not necessarily in God. Faith is
    important far outside the realm of religion: having faith in other
    people, in oneself, in the world, in the existence of truth, justice
    and beauty. There is a continuum of faith, from the basic everyday
    trust in others to the grand devotion to divine entities.

    Recent discoveries in the behavioural sciences, such as experimental
    economics and game theory, show that it is a common human attitude
    towards the world to have faith. It is vital in human interactions;
    and it is no coincidence that the importance of anchoring behaviour in
    riskful trust is stressed in worlds as far apart as Søren
    Kierkegaard's existentialist Christianity and modern theories of
    bargaining behaviour in economic interactions. Both stress the
    importance of the inner, subjective conviction as the basis for
    actions, the feeling of an inner glow.

    One could say that modern behavioural science is re-discovering the
    importance of faith that has been known to religions for a long time.
    And I would argue that this re-discovery shows us that the activity of
    having faith can be decoupled from the belief in divine entities.

    So here is what I have faith in: we have a hand backing us, not as
    divine foresight or control, but in the very simple and concrete sense
    that we are all survivors. We are all the result of a very long line
    of survivors--amoebae, rodents and mammals--who survived long enough
    to have offspring. We can therefore have confidence that we are
    experts in survival. We have a wisdom inside, inherited from millions
    of generations of animals and humans, a knowledge of how to go about
    life. That does not in any way imply foresight or planning ahead on
    our behalf. It only implies that we have reason to trust our ability
    to deal with whatever challenges we meet. We have inherited such an
    ability.

    Therefore, we can trust each other, ourselves and life itself. We have
    no guarantee or promise of eternal life, not at all. The enigma of
    death is still there, ineradicable.

    But we have reason to have confidence in ourselves. The basic fact
    that we are still here--despite snakes, stupidity and nuclear
    weapons--gives us reason to have confidence in ourselves and each
    other, to trust others and to trust life. To have faith.

    Because we are here, we have reason for having faith in having faith.

                                                   |[316]back to contents|
    ______________________________________________________________________

    [317]STEVE GIDDINGS
    Theoretical Physicist, University of California, Santa Barbara
    [giddings100.jpg] I believe that black holes do not destroy
    information, as Hawking argued long ago, and the reason is that strong
    gravitational effects undermine the statement that degrees of freedom
    inside and outside the black hole are independent.

    On the first point, I am far from alone; many string theorists and
    others now believe that black holes don't destroy information, and
    thus don't violate quantum mechanics. Hawking himself recently
    announced that he believes this, and has conceded a famous bet, but
    has not yet published the work giving a sharp statement where his
    original logic went wrong.

    The second point I believe, but cannot yet prove to the point of
    convincing many of my colleagues. While many believe that Hawking was
    wrong, there is a lot of dissent over where exactly his calculation
    fails, and none of the arguments previously presented have sharply
    identified this point of failure. If black holes emit information
    instead of destroying it, this probably comes from a breakdown of
    locality. Lowe, Polchinski, Susskind, Thorlacius, and Uglum have
    argued that the mechanism for locality violation involves formation of
    long strings. Horowitz and Maldacena have argued that the singularity
    at the center of a black hole must be a unique state, in effect
    squeezing information out in a ghostly way. And others have made other
    suggestions.

    But I believe, and my former student Lippert and I have published
    arguments, that the breakdown of locality that invalidates Hawking's
    work involves strong gravitational physics that makes it inconsistent
    to think of separate and independent degrees of freedom inside and
    outside the black hole. The assumption that these degrees of freedom
    are separate is fundamental to Hawking's argument. Our argument for
    where it fails has a satisfying generality that mirrors the generality
    of Hawking's original work--neither depends on the specifics of what
    kind of matter exists in the theory.

    We base our argument on a principle we call the locality bound. This
    is a criterion for when physical degrees of freedom can be independent
    (in technical language, described by vanishing of commutators of
    corresponding operators). Roughly, a degree of freedom corresponding
    to a particle at position x with momentum p and another at y with
    momentum q will be independent only if the separation x-y is large
    enough that they are outside of a black hole that would form from
    their mutual energy. I believe this is the beginning of a general
    criterion (which will ultimately be more precisely formulated) for when
    locality breaks down in physics. This could be the beginning of a
    deeper understanding of holography. And, it should be relevant to
    black hole physics because of the large relative energies of the
    Hawking radiation and degrees of freedom falling into a black hole.
    But this is not fully proven. Yet.
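
    Schematically, the criterion described above can be rendered as
    follows (my own notation and rough factors, offered as a sketch of the
    idea rather than the authors' precise formulation): two localized
    degrees of freedom can be treated as independent,

        [\phi(x, p), \phi(y, q)] \approx 0,

    only when their separation exceeds the Schwarzschild radius set by
    their combined energy,

        |x - y| \gtrsim R_s(E) \sim 2 G_N E, \qquad E \sim |p| + |q|

    (four dimensions, natural units). At smaller separations gravity is
    strong enough that the two cannot be regarded as separate systems.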

                                                   |[318]back to contents|
    ______________________________________________________________________

    [319]HOWARD RHEINGOLD
    Communications Expert; Author, Smart Mobs
    [rheingold100.jpg] I believe that we humans, who know so much about
    cosmology and immunology, lack a framework for thinking about why and
    how humans cooperate. I believe that part of the reason for this is an
    old story we tell ourselves about the world:  Businesses and nations
    succeed by competing well. Biology is a war, where only the fit
    survive. Politics is about winning. Markets grow solely from
    self-interest. Rooted in the zeitgeist of Adam Smith's and Charles
    Darwin's eras, the scientific, social, economic, political stories of
    the 19th and 20th centuries overwhelmingly emphasized the role of
    competition as a driver of evolution, progress, commerce, society.

    I believe that the outlines of a new narrative are becoming visible--a
    story in which cooperative arrangements, interdependencies, and
    collective action play a more prominent role and the essential (but
    not all-powerful) story of competition and survival of the fittest
    shrinks just a bit.

    Although new knowledge in biology about the evolution of altruistic
    behavior and the role of symbiotic relationships, new understandings
    of economic behavior derived from experiments in game theory,
    neuroeconomic research, sociological investigations of institutions
    for collective action, computation-enabled technologies such as grid
    computing, mesh networks, and online markets all provide important
    clues, I don't believe anyone is likely to formulate an algorithm or
    recipe for human cooperation. I suspect that the complex
    interdependencies of human thought, behavior and culture entail an
    equivalent of the limits Heisenberg found for physics and Gödel
    established for mathematics.

    I believe that more knowledge than what we have now, together with a
    conceptual framework that is neither reductionistic nor theological,
    could lead to better-designed economic and political policies and
    institutions. Institutional and conceptual barriers to mounting such
    an effort are as formidable as the methodological barriers. I am
    reminded of Doug Engelbart's problem in the 1950s. He couldn't
    convince computer engineers, librarians, or public policy analysts
    that computing machinery could be used to augment human thinking, as
    well as to perform scientific calculation and business data
    processing.
    Nobody and no institution had ever thought about computing machinery
    that way, and older ways of thinking about what machines could be
    designed to do were inadequate. Engelbart had to create "A Framework
    for Augmenting Human Intellect" before the various hardware, software,
    and human interface designers could create the first personal
    computers and networks.

    By necessity, developing useful new understandings of how humans
    cooperate and fail to cooperate is an interdisciplinary task. I don't
    believe that
    the obvious importance of such an effort guarantees that it will be
    successfully accomplished. All our institutions for gathering and
    validating knowledge--universities, corporate research laboratories,
    and foundations--reward and support specialization.

                                                   |[320]back to contents|
    ______________________________________________________________________

    [321]LEO CHALUPA
    Ophthalmologist and Neurobiologist, University of California, Davis
    [chalupa100.jpg] Here are three of my unproven beliefs:

    (i) The human brain is the most complex entity in the known universe;

    (ii) With this marvelous product of evolution we will be successful in
    eventually discovering all that there is to discover about the
    physical world, provided of course, that some catastrophic event
    doesn't terminate our species; and

    (iii) Science provides the best means to attain this ultimate goal.

    When the scientific endeavor is considered in relation to the obvious
    limitations of the human brain, the knowledge we have gained in all
    fields to date is astonishing. Consider the well-documented
    variability in the functional properties of neurons. When recordings
    are made from a single cell--for instance, in the visual cortex in
    response to a flashing spot of light--one can't help but be amazed by the
    trial-to-trial variations in the resulting responses.

    On one trial this simple stimulus might elicit a high frequency burst
    of discharges, while on the next trial there could be just a hint of a
    response. The same thing is apparent when EEG recordings are made from
    the human brain. Brain waves change in frequency and amplitude in
    seemingly random fashion even when the subject is lying in a prone
    position without any variations in behavior or the environment.

    And such variability is also evident when one does brain imaging; the
    pretty pictures seen in publications are averages of many trials that
    have been "massaged" by various computer programs.
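
    A toy simulation makes the point about averaging concrete (this is my
    own illustration, not from Chalupa's text, and the numbers are
    arbitrary): a small, fixed response buried in large trial-to-trial
    noise is invisible in any single trial but emerges once many trials
    are averaged.

        # A minimal sketch: a weak evoked response hidden in noisy trials
        # becomes visible only in the across-trial average.
        import random

        def trial(response=1.0, noise=3.0, n_bins=20):
            """One simulated trial: a fixed response plus large random noise."""
            return [response + random.gauss(0.0, noise) for _ in range(n_bins)]

        def average(trials):
            n = len(trials)
            return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

        single = trial()
        avg = average([trial() for _ in range(200)])
        print("single trial  :", [round(x, 1) for x in single[:5]], "...")
        print("200-trial mean:", [round(x, 1) for x in avg[:5]], "...")
        # Averaging N trials shrinks the noise by roughly sqrt(N), so the
        # underlying response (1.0 here) shows through even though no single
        # trial reveals it.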

    So how does the brain do it? How can it function as effectively as it
    does given the "noise" inherent in the system? I don't have a good
    answer, and neither does anyone else, in spite of the papers that have
    been published on this problem. But in line with the second of the
    three beliefs I have listed above, I am certain that someday this
    question will be answered in a definitive manner.

                                                   |[322]back to contents|
    ______________________________________________________________________

    [323]CARLO ROVELLI
    Physicist; Institut Universitaire de France & University of the
    Mediterraneum; Author, Quantum Gravity
    [rovelli100.jpg] I am convinced, but cannot prove, that time does not
    exist. I mean that I am convinced that there is a consistent way of
    thinking about nature, that makes no use of the notions of space and
    time at the fundamental level. And that this way of thinking will turn
    out to be the useful and convincing one.

    I think that the notions of space and time will turn out to be useful
    only within some approximation. They are similar to a notion like "the
    surface of the water", which loses meaning when we describe the
    dynamics of the individual atoms forming water and air: if we look at a
    very small scale, there isn't really any actual surface down there. I
    am convinced space and time are like the surface of the water:
    convenient macroscopic approximations--flimsy, illusory and
    insufficient screens that our mind uses to organize reality.

    In particular, I am convinced that time is an artifact of the
    approximation in which we disregard the large majority of the degrees
    of freedom of reality. Thus "time" is just the reflection of our
    ignorance.

    I am also convinced, but cannot prove, that there are no objects, but
    only relations. By this I mean that I am convinced that there is a
    consistent way of thinking about nature, that refers only to
    interactions between systems and not to states or changes of
    individual systems. I am convinced that this way of thinking about
    nature will turn out to be the useful and natural one in physics.

    Beliefs that one cannot prove are often wrong, as proven by the fact
    that this Edge list contains contradictory beliefs. But they are
    essential in science and often healthy. Here is a good example from 25
    centuries ago: Socrates, in Plato's Phaedo, says:

      "... seems to me very hard to prove, and I think I wouldn't be able
      to prove it ... but I am convinced ... that the Earth is
      spherical."

    Finally, I am also convinced, but cannot prove, that we humans have an
    instinct to collaborate, and that we have rational reasons for
    collaborating. I am convinced that ultimately this rationality and
    this instinct of collaboration will prevail over the shortsighted
    egoistic and aggressive instinct that produces exploitation and war.
    Rationality and instinct of collaboration have already given us large
    regions and long periods of peace and prosperity. Ultimately, they
    will lead us to a planet without countries, without wars, without
    patriotism, without religions, without poverty, where we will be able
    to share the world. Actually, maybe I am not sure I truly believe that
    I believe this; but I do want to believe that I believe this.

                                                   |[324]back to contents|
    ______________________________________________________________________

    [325]JOHN McCARTHY
    Computer Scientist; Artificial Intelligence Pioneer, Stanford
    University
    [mccarthy100.jpg]

    I think, as did Gödel, that the continuum hypothesis is false. No one
    will ever prove it false from the presently accepted axioms of set
    theory. Chris Freiling's proposed new (1986) axioms prove it false,
    but they are not regarded as intuitive.
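
    For context, Freiling's 1986 proposal can be stated roughly as follows
    (a paraphrase from memory, offered as a sketch rather than a
    citation). His "axiom of symmetry" asserts that

        for every  f : [0,1] \to \{ \text{countable subsets of } [0,1] \}
        there exist  x, y \in [0,1]  with  y \notin f(x)  and  x \notin f(y),

    the intuition being that two independently thrown "darts" at the unit
    interval should each almost surely miss the countable set attached to
    the other. Freiling showed that, over ZFC, this axiom is equivalent to
    the negation of the continuum hypothesis--which is why accepting it as
    intuitive would settle the question, and why its intuitive status is
    exactly what is disputed.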

    I think human-level artificial intelligence will be achieved.

                                                   |[326]back to contents|
    ______________________________________________________________________

    [327]JAMES O'DONNELL
    Classicist; Cultural Historian; Provost, Georgetown University;
    Author, Avatars of the Word

    [odonnell100.jpg] What do I believe is true even though I cannot prove
    it? This question has a double edge and needs two answers.

    First, and most simply: "everything". On a strict Popperian reading,
    all the things I "know" are only propositions that I have not yet
    falsified. They are best estimates, hypotheses that, so far, make
    sense of all the data that I possess. I cannot prove that my parents
    were married on a certain day in a certain year, but I claim to "know"
    that date quite confidently. Sure, there are documents, but in fact in
    their case there are different documents that present two different
    dates, and I recall the story my mother told to explain that and I
    believe it, but I cannot "prove" that I am right. I also know Newton's
    Laws and indeed believe them, but I also now know their limitations
    and imprecisions and suspect that more surprises may lurk in the
    future.

    But that's a generic answer and not much in the forward-looking and
    optimistic spirit that characterizes Edge. So let me propose this
    challenge to practitioners of my own historical craft. I believe that
    there are in principle better descriptions and explanations for the
    development and sequence of human affairs than human historians are
    capable of providing. We draw our data mainly from witnesses who share
    our scale of being, our mortality, and for that matter our viewpoint.
    And so we explain history in terms of human choices and the behavior
    of organized social units. The rise of Christianity and the Norman
    Conquest seem to us to be events we can explain, and we explain them
    in human-scale terms. But it cannot be excluded or disproved that events
    can be better explained on a much larger time scale or a much smaller
    scale of behavior. An outright materialist could argue that all my
    acts, from the day of my birth, have been a determined result of
    genetics and environment. It was fashionable a generation ago to argue
    a Freudian grounding for Luther's revolt, but in principle it could as
    easily be true and, if we could know it, more persuasive to
    demonstrate that his acts were determined at the molecular and
    submolecular level.

    The problem with such a notion is, of course, that we are very far
    from being able to outline such a theory, much less make it
    persuasive, much less make it something that another human being could
    comprehend. Understanding even one other person's life at such
    microscopic detail would take much more than one lifetime.

    So what is to be done? Of course historians will constantly struggle
    to improve their techniques and tools. The advance of dendrochronology
    (dating wood by the tree rings, and consequently dating buildings and
    other artifacts far more accurately than ever before) can stand as one
    example of the way in which technological advance can tell us things
    we never knew before. But we will also continue to write and to read
    stories in the old style, because stories are the way human beings
    most naturally make sense of their world. An awareness of the powerful
    possibility of whole other orders of possible description and
    explanation, however, should at least teach us some humility and give
    us some thoughtful pause when we are tempted to insist too strongly on
    one version of history--the one we happen to be persuaded is true.
    Even a Popperian can see that this kind of intuition can have
    a beneficial effect.

                                                   |[328]back to contents|
    ______________________________________________________________________

    [329]PAMELA McCORDUCK
    Writer; Author, Machines Who Think

    [mccorduck100.gif]

    Although I can't prove it, I believe that, thanks to new kinds of
    social modeling that take into account individual motives as well as
    group goals, we will soon grasp in a deep way how collective human
    behavior works, whether it's action by small groups or by nations. Any
    predictive power this understanding has will be useful, especially
    with regard to unexpected outcomes and even unintended consequences.
    But it will not be infallible, because the complexity of such behavior
    makes exact prediction impossible.

                                                   |[330]back to contents|
    ______________________________________________________________________

    [331]MARTIN REES
    Cosmologist, Cambridge University; UK Astronomer Royal; Author, Our
    Final Hour

    [rees100.jpg] I believe that intelligent life may presently be unique
    to our Earth, but that, even so, it has the potential to spread
    through the galaxy and beyond--indeed, the emergence of complexity
    could still be near its beginning. If SETI searches fail, that would
    not render life a cosmic sideshow. Indeed, it would be a boost to our
    cosmic self-esteem: terrestrial life, and its fate, would become a
    matter of cosmic significance. Even if intelligence is now unique to
    Earth, there's enough time lying ahead for it to spread through the
    entire Galaxy, evolving into a teeming complexity far beyond what we
    can even conceive.

    There's an unthinking tendency to imagine that humans will be around
    in 6 billion years, watching the Sun flare up and die. But the forms
    of life and intelligence that have by then emerged would surely be as
    different from us as we are from a bacterium. That conclusion would
    follow even if future evolution proceeded at the rate at which new
    species have emerged over the 3 or 4 billion years of the geological
    past. But post-human evolution (whether of organic species or of
    artefacts) will proceed far faster than the changes that led to
    our emergence, because it will be intelligently directed rather than
    being--like pre-human evolution--the gradual outcome of Darwinian
    natural selection. Changes will drastically accelerate in the present
    century--through intentional genetic modifications, targeted drugs,
    perhaps even silicon implants into the brain. Humanity may not
    persist as a single species for more than a few centuries--especially
    if communities have by then become established away from the earth.

    But a few centuries is still just a millionth of the Sun's future
    lifetime--and the entire universe probably has a longer future still.
    The remote future is squarely in the realm of science fiction.
    Advanced intelligences billions of years hence might even create new
    universes. Perhaps they'll be able to choose what physical laws
    prevail in their creations. Perhaps these beings could achieve the
    computational capability to simulate a universe as complex as the one
    we perceive ourselves to be in.

    My belief may remain unprovable for billions of years. It could be
    falsified sooner--for instance, we (or our immediate post-human
    descendants) may develop theories that reveal inherent limits to
    complexity. But it's a substitute for religious belief, and I hope
    it's true.

                                                   |[332]back to contents|
    ______________________________________________________________________

    [333]CAROLYN PORCO
    Planetary Scientist; Leader, Cassini Imaging Team; Director, CICLOPS,
    Space Science Institute, Boulder

    [porco100.jpg] This is a treacherous question to ask, and a trivial
    one to answer. Treacherous because the shoals between the written
    lines can be navigated by some to the conclusion that truth and
    religious belief develop by the same means and are therefore
    equivalent. To those unfamiliar with the process by which scientific
    hunches and hypotheses are advanced to the level of verifiable fact,
    and the exacting standards applied in that process, the impression may
    be left that the work of the scientist is no different than that of
    the prophet or the priest.

    Of course, nothing could be further from reality.

    The whole scientific method relies on the deliberate, high
    magnification scrutiny and criticism by other scientists of any
    mechanisms proposed by any individual to explain the natural world. No
    matter how fervently a scientist may "believe" something to be true,
    and unlike religious dogma, his or her belief is not accepted as a
    true description or even approximation of reality until it passes
    every test conceivable, executable and reproducible. Nature is the
    final arbiter, and great minds are great only in so far as they can
    intuit the way nature works and are shown by subsequent examination
    and proof to be right.

    With that preamble out of the way, I can say that for me personally,
    this is a trivial question to answer. Though no one has yet shown that
    life of any kind, other than Earthly life, exists in the cosmos, I
    firmly believe that it does. My justification for this belief is a
    commonly used one, with no strenuous exertion of the intellect or
    suspension of disbelief required.

    Our reconstruction of early solar system history, and the chronology
    of events that led to the origin of the Earth and moon and the
    subsequent development of life on our planet, informs us that
    self-replicating organisms originated from inanimate materials in a
    very narrow window of time. The tail end of the accretion of the
    planets--a period known as "the heavy bombardment"--ended about 3.8
    billion years ago, approximately 800 million years after the Earth
    formed. This is the time of formation and solidification of the big
    flooded impact basins we readily see on the surface of the Moon, and
    the time when the last large catastrophe-producing impacts also
    occurred on the Earth. In other words, the terrestrial surface
    environment didn't settle down and become conducive to the development
    of fragile living organisms until nearly a billion years had gone by.

    However, the first appearance of life forms on the Earth, the oldest
    fossils we have discovered so far, occurred shortly after that: around
    3.5 billion years ago or even earlier. The interval in between--only
    300 million years, less than the time represented by the rock
    layers in the walls of the Grand Canyon--is the proverbial blink of
    the cosmic eye. Despite the enormous complexity of even the simplest
    biological forms and processes, and the undoubtedly lengthy and
    complicated chain of chemical events that must have occurred to evolve
    animated molecular structures from inanimate atoms, it seems an
    inevitable conclusion that Earthly life developed very quickly and as
    soon as the coast was clear long enough to do so.

    Evidence is gathering that the events that created the solar system
    and the Earth, driven predominantly by gravity, are common and
    pervasive in our galaxy and, by inductive reasoning, in galaxies
    throughout the cosmos. The cosmos is very, very big. Consider the
    overwhelming numbers of galaxies in the visible cosmos alone and all
    the Sun-like stars in those galaxies and the number of habitable
    planets likely to be orbiting those stars and the ease with which life
    developed on our own habitable planet, and it becomes increasingly
    unavoidable that life is itself a fundamental feature of our universe
    ... along with dark matter, supernovae, and black holes.
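
    A back-of-envelope calculation shows the scale of this argument. All
    of the figures below are assumed round numbers of my own, chosen only
    for illustration (none come from Porco's text):

        # Rough counting argument: even pessimistic fractions leave an
        # enormous number of candidate worlds.
        galaxies         = 1e11   # galaxies in the visible universe (rough)
        stars_per_galaxy = 1e11   # stars per galaxy (rough)
        frac_sun_like    = 0.1    # assumed fraction of Sun-like stars
        frac_habitable   = 0.1    # assumed fraction with a habitable planet

        candidates = galaxies * stars_per_galaxy * frac_sun_like * frac_habitable
        print(f"~{candidates:.0e} potentially habitable planets")  # ~1e+20
        # Even if life arose on only one candidate in a billion, that would
        # still leave on the order of 1e11 living worlds.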

    I believe we are not alone. But it doesn't matter what I think because
    I can't prove it. It is so beguiling a question, though, that
    humankind is presently and actively seeking the answer. The search for
    life and so-called "habitable zones" is becoming increasingly the
    focus of our planetary explorations, and it may in fact transpire one
    day that we discover life forms under the ice on some moon orbiting
    Jupiter or Saturn, or decode the intelligible signals of an advanced,
    unreachably distant, alien organism. That will be a singular day
    indeed. I only hope I'm still around when it happens.

                                                   |[334]back to contents|
    ______________________________________________________________________

    [335]CHARLES SIMONYI
    Computer Scientist, Intentional Software Corporation; formerly Chief
    Architect, Microsoft Corporation

    [simonyi100.jpg] I believe that we are writing software the wrong way.
    There are sound evolutionary reasons for why we are doing what we are
    doing--what we can call the "programming the problem in a computer
    language" paradigm--but the incredible success of Moore's law has
    blinded us to being stuck in what is probably an evolutionary
    backwater.

    There are many warning signs. Computers are demonstrably ten thousand
    times better than not so long ago. Yet we are not seeing their
    services improving at the same rate (with some exceptions--for
    example, games and internet searches). On an absolute scale, a
    business or administration problem that would take maybe one hundred
    pages to describe precisely will take millions of dollars to program
    for a computer, and often the program will not work. Recently a
    smaller airline came to a standstill due to a problem in its crew
    scheduling software--raising the ire of Congress, not to mention its
    customers.

    My laptop could store 200 pages of text (half a megabyte) for each and
    every crew member at this airline just in its fast memory, and a
    hundred times more (a veritable encyclopedia of 20,000 pages) for each
    person on its hard disk. Of course for a schedule we would need maybe
    one or two--or at most ten--pages per person. Even with all the
    rules--the laws, the union contracts, the local, state and federal
    taxes, the duty time limitations, the FAA regulations on crew
    certification--is there anyone who believes that the problem is not
    simple in terms of computing? We need to store and process at most 10
    pages per person where we have capacity for two thousand times more in
    one cheap laptop! Of course the problem is complex in terms of the
    problem domain--but not shockingly so. I would estimate that all the
    rules possibly relevant to aircraft crew scheduling are expressible in
    less than a thousand pages--or half of one percent of the fast memory.

    Software is surely the bottleneck on the high-tech horn of plenty. The
    scheduling program for the airline takes many thousand times more
    memory than I believe it should. Hence the software represents
    complexity that is many thousand times greater than what I believe the
    problem is--no wonder that some planes are assigned three pilots by
    the software while others can't fly because the copilot is not
    scheduled. Note that the cost of the memory is not the issue--we could
    afford that waste. But the use of so much memory for software is an
    indication of the complexity inflation that occurs during programming,
    and that inflation is the real bottleneck.
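
    The capacity arithmetic above is easy to check. The crew count and the
    laptop's memory sizes below are my own assumptions, chosen only to
    match the figures in the text, so treat this as an illustration rather
    than data about any particular airline:

        # Back-of-envelope check of the "200 pages in RAM, 20,000 pages on
        # disk, 10 pages needed" argument.
        crew_members = 2000                 # assumed airline crew count
        page_bytes   = 2500                 # roughly 2,500 characters per page
        ram_bytes    = 1 * 1024**3          # assumed 1 GB of fast memory
        disk_bytes   = 100 * 1024**3        # assumed 100 GB hard disk

        pages_in_ram  = ram_bytes / (crew_members * page_bytes)
        pages_on_disk = disk_bytes / (crew_members * page_bytes)
        pages_needed  = 10                  # the upper estimate in the text

        print(f"RAM  allows ~{pages_in_ram:.0f} pages per crew member")   # ~215
        print(f"Disk allows ~{pages_on_disk:.0f} pages per crew member")  # ~21475
        print(f"Disk headroom over the need: ~{pages_on_disk / pages_needed:.0f}x")
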
    What is going on? I like to use cryptography as the metaphor. As we
    know, in cryptography we take a message and we combine it with a key,
    using a difficult-to-invert function, to get the code. Programmers
    using today's paradigm start from a problem statement--for example,
    that a Boeing 767 requires a pilot, a copilot, and seven cabin crew
    with various certification requirements for each--and combine this
    with their knowledge of computer science and software engineering:
    that is, of how this rule can be encoded in a computer language and
    turned into an algorithm. This act of combining is the programming
    process, the result of which is called the source code. Now,
    programming is well known to be a difficult-to-invert function,
    perhaps not to cryptography's standards, but one can joke about the
    possibility of the airline keeping its proprietary scheduling rules
    secret by publishing the source code of the implementation, since no
    one could figure out from the source code what the rules were--or
    really whether the code had to do with scheduling or with spare-parts
    inventory. It can be that obscure.

    The amazing thing is that today it is the source code--that is, the
    encrypted problem--which is the artifact all of software engineering
    focuses on. To add insult to injury, the "encryption", that is, the
    programming, is done manually, which means high costs, low throughput
    and high error rates. Contrast this with cryptography proper: when the
    general realizes that he is about to send a wrong encrypted message,
    no one would think of editing the code after the encryption or "fixing
    the code"; instead the cleartext would be edited first and then this
    improved message would be re-encrypted at computer speed and with
    computer accuracy. In other words the message may be wrong, but it
    won't be wrong because of the encryption, and it is easily fixed.
    We see that the complexity inflation comes from the encoding. The
    problem statement above is obviously oversimplified, but remember that
    we used just two lines from our realistic budget of a thousand pages,
    and we haven't even used the aviation jargon which could make these
    statements even more compact and more precise. But once these
    statements are viewed through the funhouse mirror of software coding,
    they become all but unrecognizable: a thousand times fatter,
    disjointed, foreign. And like any manual product, the result will have
    many flaws--beyond the errors in the rules themselves.
    What can be done? Follow the metaphor. First, refocus on recording the
    problem statement--the "cleartext" in our metaphor. This is not a
    program in any sense of the word--it is just a straightforward
    recording of the subject matter experts' contributions, using their
    own terms of art, their jargon, their own notations. Next, empower the
    programmers to program not the problem itself, but to express their
    software engineering expertise and decisions as computer code for the
    encoder that takes the recorded problem statement and generates the
    code from it. This is called generative programming, and I believe it
    is the future of software.
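
    To make the pattern concrete, here is a deliberately tiny sketch of
    the generative idea (the rule format, the function names and the
    generated checker are all invented for illustration; this is not
    Simonyi's system): the experts' statement is recorded as plain data,
    and a generator written by programmers turns that statement into code.

        # The "cleartext": domain experts' rules recorded as data, not code.
        CREW_RULES = {
            "Boeing 767": {"pilot": 1, "copilot": 1, "cabin crew": 7},
            "Boeing 737": {"pilot": 1, "copilot": 1, "cabin crew": 4},
        }

        def generate_checker(rules):
            """The 'encoder': emits Python source from the problem statement."""
            lines = ["def crew_is_legal(aircraft, crew_counts):"]
            lines.append("    requirements = {")
            for aircraft, needed in rules.items():
                lines.append(f"        {aircraft!r}: {needed!r},")
            lines.append("    }")
            lines.append("    needed = requirements[aircraft]")
            lines.append("    return all(crew_counts.get(role, 0) >= n for role, n in needed.items())")
            return "\n".join(lines)

        # When a rule changes, the experts edit CREW_RULES (the cleartext) and
        # the checker is regenerated--never patched by hand.
        source = generate_checker(CREW_RULES)
        exec(source)  # compile the generated checker into this session
        print(crew_is_legal("Boeing 767",
                            {"pilot": 1, "copilot": 1, "cabin crew": 7}))  # True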

                                                   |[336]back to contents|
    ______________________________________________________________________

    [337]CHRIS W. ANDERSON
    Editor-In-Chief, Wired
    [andersonw100.jpg] The Intelligent Design movement has opened my eyes.
    I realize that although I believe that evolution explains why the
    living world is the way it is, I can't actually prove it. At least not
    to the satisfaction of the ID folk, who seem to require that every
    example of extraordinary complexity and clever plumbing in nature be
    fully traced back (not just traceable back) along an evolutionary tree
    to prove that it wasn't directed by an invisible hand. If the
    scientific community won't do that, then the argument goes that they
    must accept a large red "theory" stamp placed on the evolution
    textbooks and that alternative theories, such as "guided" evolution
    and creationism, be taught alongside.
    So, by this standard, virtually everything I believe in must now fall
    under the shadow of unprovability. Most importantly, this includes
    the belief that democracy, capitalism and other market-driven systems
    (including evolution!) are better than their alternatives. Indeed, I
    suppose I should now refer to them as the "theory of democracy" and
    the "theory of capitalism", to join the theory of evolution, and
    accept the teaching of living Marxism and fascism as alternatives in
    high schools.

                                                   |[338]back to contents|
    ______________________________________________________________________

    [339]VERENA HUBER-DYSON
    Mathematician, Emeritus Professor, Dept of Philosophy, University of
    Calgary; Author, Gödel's Theorems
    [huber-dyson100.jpg] Most of what I believe I cannot prove, simply for
    lack of time and energy; truths that I'd claim to know because they
    have been proved by others. That is how inextricably our beliefs are
    tied up with labors accomplished by fellow beings. And then there are
    mathematical truths that we now know are not provable. These phenomena
    have become favorites with the media but can only be made sense of by
    a serious scrutiny of the idea of mathematical truth and a specific
    articulation of a proof-concept.
    But running across Esther's contribution I came up with a catchy
    response:

    I believe in the creative power of boredom.
    Or, to put it into the form suggested by the Edge question:
    I believe that, no matter how relentlessly we overfeed our young with
    prepackaged interactive entertainments, before long they will break
    out and invent their own amusements. I know from experience; boredom
    drove me into mathematics during my preteens. But I cannot prove it,
    till it actually happens. Probably in less than a generation kids will
    be amusing themselves and each other in ways that we never dreamt of.
    Such is my belief in human nature, in the resilience of its good
    sense.

    Here is an observation from mathematical practice. By now the concept
    of an algorithm, well-defined, is widely hailed as the way to solve
    problems, more precisely sequences of problems labeled by a numerical
    parameter. The implementation of a specific algorithm may be boring, a
    task best left to a machine, while the construction of the algorithm
    together with a rigorous proof that it works is a creative and often
    laborious enterprise.

    For illustration consider group theory. A group is defined as a
    structure consisting of a non-empty set and a binary operation obeying
    certain laws. The theory of groups consists of all sentences true of
    all groups; its restriction to the formal "first order" language L
    determined by the group structure is called the elementary theory TG
    of groups. Here we have a formal proof procedure, proven complete by
    Gödel in his PhD thesis the year before his incompleteness proof was
    published. The elementary theory of groups is axiomatizable: it
    consists of exactly those sentences that are derivable from the axioms
    by means of the rules of first order logic. Thus TG is an effectively
    (recursively) enumerable subset of L; a machine, unlimited in power
    and time, could eventually come up with a proof of every elementary
    theorem of group theory. However, a human group theorist would still
    be needed to select the interesting theorems out of the bulk of the
    merely true. The development of TG is no mean task, although its
    language is severely restricted.

    The axiomatizability of a theory always raises the question of how to
    recognize the non-theorems. The set FF of those L-sentences that fail
    in some finite group is recursively enumerable by an enumeration of
    all finite groups, a simple matter, in principle. But, as all the
    excitement over the construction of finite simple monsters has amply
    demonstrated, that again is in reality no simple task.

    Neither the theory of finite groups nor the theory of all groups is
    decidable. The most satisfying proof of this fact shows how to
    construct to every pair (A, B) of disjoint recursively enumerable sets
    of L-sentences, where A contains all of TG and B contains FF, a
    sentence S that belongs neither to A nor to B. This is the deep and
    sophisticated theorem of effective non-separability proved in the
    early sixties independently by Mal'cev in the USSR and by Tarski's pupil
    Cobham.
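
    Stated symbolically (a paraphrase of the theorem just described, not a
    quotation): for any pair of recursively enumerable sets A, B \subseteq L
    with

        A \cap B = \varnothing, \qquad TG \subseteq A, \qquad FF \subseteq B,

    there is a sentence S \in L with S \notin A and S \notin B. In
    particular, no decidable set of sentences can separate those true in
    all groups from those that fail in some finite group, so neither
    theory admits a decision procedure.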

    It follows that constructing infinite counter-examples in group theory
    is a truly creative enterprise, while the theory of finite groups is
    not axiomatizable, and so recognizing a truth about finite groups
    requires deep insight and a creative jump. The concept of finiteness
    in group theory is not elementary and yet we have a clear idea of what
    is meant by talking about all finite groups, a marvelously intriguing
    situation.

    To wind up with a specific answer to the 2005 Question:

    I do believe that every sentence expressible in the formal language of
    elementary group theory is either true of all finite groups or else
    fails for at least one of them.

    This statement may at first sight look like a logical triviality. But
    when you try to prove it honestly you find that you would need a
    decision procedure, which would, given any sentence S of L, yield either
    a proof that S holds in all finite groups or else a finite group in
    which S fails. By the inseparability theorem mentioned above, there is
    no such procedure.

    If asked whether I hold the equivalent belief for the theory of all
    groups I would hesitate because the concept of an infinite
    counterexample is not as concrete to my mind as that of the totality
    of all finite groups. These are the areas where personal intuition
    starts to come into play.

                                                   |[340]back to contents|
    ______________________________________________________________________

    [341]DOUGLAS RUSHKOFF
    Media Analyst; Documentary Writer; Author, Media Virus
    [rushkoff100.jpg] I can't prove it more than anecdotally, but I
    believe evolution has purpose and direction. It appears obvious, yet
    absolutely unconfirmable, that matter is groping towards complexity.
    While the laws of nature--and time itself--require that objects and
    life forms attain durability and sustainability in order to survive,
    this seems to me more a means to an end than an end in itself.

    Theology goes a long way towards imbuing substance and processes with
    meaning--describing life as "matter reaching towards divinity," or as
    the process through which divinity calls matter back up into
    itself--but theologians repeatedly make the mistake of ascribing this
    sense of purpose to history rather than the future. This is only
    natural, since the narrative structures we use to understand our world
    tend to have beginnings, middles, and ends. In order to experience the
    pay-off at the end of the story, we need to see it as somehow built-in
    to the original intention of events.

    It's also hard for people to contend with the great probability that
    we are simply over-advanced fungi and bacteria, hurtling through a
    galaxy in cold and meaningless space. Our existence may be
    unintentional, meaningless and purposeless; but that doesn't preclude
    meaning or purpose from emerging as a result of our interaction and
    collaboration. Meaning may not be a precondition for humanity, but
    rather a byproduct of it.

    That's why it's so important to recognize that evolution, at its best,
    is a team sport. As Darwin's later, lesser-known, but more important
    works contended, survival of the fittest is not a law applied to
    individuals, but to groups. Just as it is now postulated that
    mosquitoes cause their victims to itch and sweat nervously so that
    other mosquitoes can more easily find the target, most great leaps
    forward in human evolution--from the formation of clans to the
    building of cities--are feats of collaborative effort. Better rates of
    survival are as much a happy side effect of good collaboration as
    their purpose.

    If we could stop relating to meaning and purpose as artifacts of some
    divine creative act, and see them instead as the yield of our own
    creative future, they become goals, intentions, and processes very
    much in reach--rather than the shadows of childlike, superstitious
    mythology.

    The proof is impossible, since it is an unfolding one. Like reaching a
    horizon, arrival merely necessitates more travel.

                                                   |[342]back to contents|
    ______________________________________________________________________

    [343]RUDY RUCKER
    Mathematician, Computer Scientist; CyberPunk Pioneer; Novelist;
    Author, Infinity and the Mind
    [rucker100.jpg] Reality Is A Novel.

    I'd like to propose a modified Many Universes theory. Rather than
    saying every possible universe exists, I'd say, rather, that there is
    a sequence of possible universes, akin to the drafts of a novel.

    We're living in a draft version of the universe--and there is no final
    version. The revisions never stop.

    From time to time it's possible to be aware of this. In particular,
    when you relax and stop naming things and forming opinions, your
    consciousness spreads out across several drafts of the universe.
    Things don't need to be particularly one way or the other until you
    pin them down.

    Each draft, each spacetime, each sheet of reality is itself rigorously
    deterministic; there really is no underlying randomness in the world.
    Instead we have a great web of synchronistic entanglements, with
    causes and effects flowing forward and backwards through time. The
    start of a novel matches its ending; the past matches the future.
    Changing one thing changes everything. If we fully know everything
    about the Now moment, we know the entire past and future.

    With this in mind, explaining any given draft of the universe becomes
    a matter of explaining the contents of a single Now moment of that
    draft. This in turn means that we can view the evolution of the
    successive drafts as an evolution of different versions of a
    particular Now moment. As Scarlett's climactic scene with Rhett is
    repeatedly rewritten, all the rest of Gone With The Wind changes to
    match.

    And this evolution, too, can be deterministic. We can think of there
    as being two distinct deterministic rules, a Physics Rule and a
    Metaphysics Rule. The Physics Rule consists of time-reversible laws
    that grow the Now moment upwards and downwards to fill out the entire
    past and future of spacetime. And we invoke the Metaphysics Rule to
    account for the contents of the Now moment. The Metaphysics Rule is
    deterministic but not reversible; it grows sideways across a dimension
    that we might call paratime, turning some simple seed into the
    space-filling pattern found in the Now.

    The Metaphysics Rule is...what? One possibility is that it's something
    quite simple, perhaps as simple as an eight-bit cellular automaton
    rule generating complex-looking patterns out of pure computation. Or
    perhaps the Metaphysics Rule is like the mind of an author creating a
    novel, searching out the best word to write next, somehow peering into
    alternate realities. Or, yet again, the big Metaphysics Rule in the
    sky could be the One cosmic mind, the Big Aha, the eternal secret,
    living in the spaces between your thoughts.
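
    Such an eight-bit rule is easy to exhibit. The sketch below is my own
    illustration, not Rucker's; it uses Wolfram's Rule 110 as the example
    rule and grows a complex-looking pattern from a single-cell seed, the
    entire update law being nothing more than eight output bits:

        # An elementary cellular automaton: the rule is fully specified by
        # eight bits (here Rule 110), yet the pattern it grows is complex.
        RULE = 110

        def step(cells, rule=RULE):
            """One synchronous update of a row of 0/1 cells (wrap-around)."""
            n = len(cells)
            return [
                (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
                for i in range(n)
            ]

        def run(width=79, steps=30):
            row = [0] * width
            row[width // 2] = 1       # the "simple seed"
            for _ in range(steps):
                print("".join("#" if c else "." for c in row))
                row = step(row)

        if __name__ == "__main__":
            run()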

                                                   |[344]back to contents|
    ______________________________________________________________________

    [345]RUPERT SHELDRAKE
    Biologist, London; Author of The Presence of the Past

    [sheldrake100.jpg] I believe, but cannot prove, that memory is
    inherent in nature. Most of the so-called laws of nature are more like
    habits.

    There is no need to suppose that all the laws of nature sprang into
    being fully formed at the moment of the Big Bang, like a kind of
    cosmic Napoleonic code, or that they exist in a metaphysical realm
    beyond time and space.

    Before the general acceptance of the Big Bang theory in the 1960s,
    eternal laws seemed to make sense. The universe itself was thought to
    be eternal and evolution was confined to the biological realm. But we
    now live in a radically evolutionary universe.

    If we want to stick to the idea of natural laws, we could say that as
    nature itself evolves, the laws of nature also evolve, just as human
    laws evolve over time. But then how would natural laws be remembered
    or enforced? The law metaphor is embarrassingly anthropomorphic.
    Habits are less human-centred. Many kinds of organisms have habits,
    but only humans have laws.

    Habits are subject to natural selection; and the more often they are
    repeated, the more probable they become, other things being equal.
    Animals inherit the successful habits of their species as instincts.
    We inherit bodily, emotional, mental and cultural habits, including
    the habits of our languages.

    The habits of nature depend on non-local similarity reinforcement.
    Through a kind of resonance, the patterns of activity in
    self-organizing systems are influenced by similar patterns in the
    past, giving each species and each kind of self-organizing system a
    collective memory.

    Is this just a vague philosophical idea? I believe it can be
    formulated as a testable scientific hypothesis.

    My interest in evolutionary habits arose when I was engaged in
    research in developmental biology, and was reinforced by reading
    Charles Darwin, for whom the habits of organisms were of central
    importance. As Francis Huxley has pointed out, Darwin's most famous
    book could more appropriately have been entitled The Origin of Habits.

    Over the course of fifteen years of research on plant development, I
    came to the conclusion that for understanding the development of
    plants, their morphogenesis, genes and gene products are not enough.
    Morphogenesis also depends on organizing fields. The same arguments
    apply to the development of animals. Since the 1920s many
    developmental biologists have proposed that biological organization
    depends on fields, variously called biological fields, or
    developmental fields, or positional fields, or morphogenetic fields.

    All cells come from other cells, and all cells inherit fields of
    organization. Genes are part of this organization. They play an
    essential role. But they do not explain the organization itself. Why
    not?

    Thanks to molecular biology, we know what genes do. They enable
    organisms to make particular proteins. Other genes are involved in the
    control of protein synthesis. Identifiable genes are switched on and
    particular proteins made at the beginning of new developmental
    processes. Some of these developmental switch genes, like the Hox
    genes in fruit flies, worms, fish and mammals, are very similar. In
    evolutionary terms, they are highly conserved. But switching on genes
    such as these cannot in itself determine form, otherwise fruit flies
    would not look different from us.

    Many organisms live as free cells, including many yeasts, bacteria and
    amoebas. Some form complex mineral skeletons, as in diatoms and
    radiolarians, spectacularly pictured in the nineteenth century by
    Ernst Haeckel. Just making the right proteins at the right times
    cannot explain such structures without many other forces coming into
    play, including the organizing activity of cell membranes and
    microtubules.

    Most developmental biologists accept the need for a holistic or
    integrative conception of living organization. Otherwise biology will
    go on floundering, even drowning, in oceans of data, as yet more
    genomes are sequenced, genes are cloned and proteins are
    characterized.

    I suspect that morphogenetic fields work by imposing patterns on the
    otherwise random or indeterminate patterns of activity. For example,
    they cause microtubules to crystallize in one part of the cell rather
    than another, even though the subunits from which they are made are
    present throughout the cell.

    Morphogenetic fields are not fixed forever, but evolve. The fields of
    Afghan hounds and poodles have become different from those of their
    common ancestors, wolves. How are these fields inherited? I believe,
    but cannot prove, that they are transmitted by a kind of non-local
    resonance, and I have suggested the term morphic resonance for this
    process.

    The fields organizing the activity of the nervous system are likewise
    inherited through morphic resonance, conveying a collective,
    instinctive memory. The resonance of a brain with its own past states
    also helps to explain the memories of individual animals and humans.

    Social groups are likewise organized by fields, as in schools of fish
    and flocks of birds. Human societies have memories that are
    transmitted through the culture of the group, and are most explicitly
    communicated through the ritual re-enactment of a founding story or
    myth, as in the Jewish Passover celebration, the Christian Holy
    Communion and the American Thanksgiving dinner, through which the past
    becomes present through a kind of resonance with those who have
    performed the same rituals before.

    Others may prefer to dispense with the idea of fields and explain the
    evolution of organization in some other way, perhaps using more
    general terms like "emergent systems properties". But whatever the
    details of the models, I believe that the natural selection of habits
    will play an essential part in any integrated theory of evolution,
    including not just biological evolution, but also physical, chemical,
    cosmic, social, mental and cultural evolution.

                                                   |[346]back to contents|
    ______________________________________________________________________

    [347]CHRISTINE FINN
    Archaeologist; Journalist; Writer-in-Residence, University of
    Bradford; Author, Past Poetic
    [finn100.jpg]

    I have a belief that modern humans are greatly under-utilising their
    cognitive capabilities. Finding proof of this, however, would lie in
    embracing those very same sentient possibilities--visceral
    hunches--which were possibly part of the world of archaic humans. This
    enlarged realm of the senses acknowledges reason, but also heeds the
    grip of the gut, the body poetic.

                                                   |[348]back to contents|
    ______________________________________________________________________

    [349]NED BLOCK
    Philosopher and Psychologist, New York University
    [block100.jpg] I believe that the "Hard Problem of Consciousness" will
    be solved by conceptual advances made in connection with cognitive
    neuroscience. Let me explain. No one has a clue (at the moment) how to
    answer the question of why the neural basis of the phenomenal feel of
    my experience of red is the neural basis of that phenomenal feel
    rather than a different one or none at all. There is an "explanatory
    gap" here which no one has a clue how to close. This problem is
    conceptually and explanatorily prior to the issue of what the nature
    of the self is, as can be seen in part by noting that the problem
    would persist even for experiences that are not organized into selves.
    No doubt closing the explanatory gap will require ideas that we cannot
    now anticipate. The mind-body problem is so singular that no appeal to
    the closing of past explanatory gaps really justifies optimism, but I
    am optimistic nonetheless.

                                                   |[350]back to contents|
    ______________________________________________________________________

    [351]REBECCA GOLDSTEIN
    Philosopher and Novelist, Trinity College; Author, Incompleteness
    [goldstein100.jpg] I believe that scientific theories are a means of
    going--somewhat mysteriously--beyond what we are able to observe of
    the physical world, penetrating into the structure of nature. The
    "theoretical" parts of scientific theories--the parts that speak in
    seemingly non-observational terms--aren't, I believe, ultimately
    translatable into observations, nor are they just algorithmic black
    boxes into which we feed our observations and churn out our
    predictions. I
    believe the theoretical parts have descriptive content and are true
    (or false) in the same prosaic way that the observational parts of
    theories are true (or false). They're true if and only if they
    correspond to reality.

    I also believe that my belief about scientific theories isn't itself
    scientific. Science itself doesn't decide how it is to be interpreted,
    whether realistically or not.

    That the penetration into unobservable nature is accomplished by way
    of abstract mathematics is a large part of what makes it
    mystifying--mystifying enough to be coherently if unpersuasively (at
    least to me) denied by scientific anti-realists. It's difficult to
    explain exactly how science manages to do what it is that I believe it
    does--notoriously difficult when trying to explain how quantum
    mechanics, in particular, describes unobserved reality. The
    unobservable aspects of nature that yield themselves to our knowledge
    must be both mathematically expressible and connected to our
    observations in requisite ways. The seventeenth-century titans, men
    like Galileo and Newton, figured out how to do this, how to wed
    mathematics to empiricism. It wasn't a priori obvious that it was
    going to work. It wasn't a priori obvious that it was going to get us
    so much farther into nature's secrets than the Aristotelian
    teleological methodology it was supplanting. A lot of assumptions
    about the mathematical nature of the world and its fundamental
    correspondence to our cognitive modes (a correspondence they saw as
    reflective of God's friendly intentions toward us) were made by them
    in order to justify their methodology.

    I also believe that, since not all of the properties of nature are
    mathematically expressible--why should they be? It takes a very
    special sort of property to be so expressible--there are aspects
    of nature that we will never get to by way of our science. I believe
    that our scientific theories--just like our formalized mathematical
    systems (as proved by Gödel)--must be forever incomplete. The very
    fact of consciousness itself (an aspect of the material world we
    happen to know about, but not because it was revealed to us by way of
    science) demonstrates, I believe, the necessary incompleteness of
    scientific theories.

                                                   |[352]back to contents|
    ______________________________________________________________________

    [353]JONATHAN HAIDT
    Psychologist, University of Virginia
    [haidt100.jpg] I believe, but cannot prove, that religious experience
    and practice are generated and structured largely by a few emotions
    that evolved for other reasons, particularly awe, moral elevation,
    disgust, and attachment-related emotions. That's not a prediction
    likely to raise any eyebrows in this forum.

    But I further believe (and cannot prove) that hostility toward
    religion is an obstacle to progress in psychology. Most human beings
    live in a world full of magic, miracles, saints, and constant commerce
    with divinity. Psychology at present has little to say about these
    parts of life; we focus instead on a small set of topics that are
    fashionable, or that are particularly tractable with our favorite
    methods. If psychologists took religious experience seriously and
    tried to understand it from the inside, as anthropologists do with
    other cultures, I believe it would enrich our science. I have found
    religious texts and testimonials about purity and pollution essential
    for understanding the emotion of disgust.

                                                   |[354]back to contents|
    ______________________________________________________________________

    [355]DONALD I. WILLIAMSON
    Biologist, University of Liverpool; Author, The Origins of Larvae
    [williamson100.jpg] I believe I can explain the Cambrian explosion.
    The Cambrian explosion refers to the first appearance in a relatively
    short space of geological time of a very wide assortment of animals
    more than 500 million years ago. I believe it came about through
    hybridization.
    Many well preserved Cambrian fossils occur in the Burgess shale, in
    the Canadian Rockies. These fossils include small and soft-bodied
    animals, several of which were planktonic but none were larvae.
    Compared with modern animals, some of them seem to have the front end
    of one animal and rear end of another. Modern larvae present a
    comparable set-up: larvae seem to be derived from animals in different
    groups from their corresponding adults. I have amassed a bookful of
    evidence that the basic forms of larvae did indeed originate as
    animals in other groups and that such forms were transferred by
    hybridization. Animals with larvae are "sequential chimeras", in which
    one body-form--the larva--is followed by another, distantly related
    form--the adult. I believe there were no Cambrian larvae, and Cambrian
    hybridizations produced "concurrent chimeras", in which two distantly
    related body-forms appeared together.
    About 700 million years ago, shortly before the Cambrian, animals with
    tissues (metazoans) made their first appearance. I agree with Darwin
    that there were several different forms (Darwin suggested four or
    five), and I believe they resulted from hybridizations between
    different colonial protists. Protists are mostly single-celled, but
    colonial forms consist of many similar cells. All Cambrian animals
    were marine, and, like most modern marine animals, they shed their
    eggs and sperm into the water, where fertilization took place. Eggs of
    one species frequently encountered sperm of another, and there were
    only poorly developed mechanisms to prevent hybridization. Early
    animals had small genomes, leaving plenty of spare gene capacity.
    These factors led to many fruitful hybridizations, which resulted in
    concurrent chimeras. Not only did the original metazoans hybridize but
    the new animals resulting from these hybridizations also hybridized,
    and this produced the explosion in animal form.
    The acquisition of larvae by hybridization came much later, when there
    was little spare genome capacity in recipes for single animals, and it
    is still going on. In the echinoderms (the group that includes
    sea-urchins and starfish) there is evidence that there were no larvae
    in either the Cambrian or the Ordovician (the following period), and
    this might well apply to other major groups. Acquiring parts, rather
    than larvae, by hybridization continued, I believe, throughout the
    Cambrian and Ordovician and probably later, but, as genomes became
    larger and filled most of the available space, later hybridizations
    led to smaller changes in adult form or to acquisitions of larvae. The
    gradual evolution of better mechanisms to prevent eggs being
    fertilized by foreign sperm resulted in fewer fruitful hybridizations,
    but occasional hybridizations still take place.
    Hybridogenesis, the generation of new organisms by hybridization, and
    symbiogenesis, the generation of new organisms by symbiosis, both
    involve fusion of lineages, whereas Darwinian "descent with
    modification" is entirely within separate lineages. These forms of
    evolution function in parallel, and "natural selection" works on the
    results.
    I cannot prove that Cambrian animals had poorly developed specificity
    and spare gene capacity, but it makes sense.

                                                   |[356]back to contents|
    ______________________________________________________________________

    [357]SETH LLOYD
    Quantum Mechanical Engineer, Massachusetts Institute of Technology

    [lloyd100.jpg]
    I believe in science. Unlike mathematical theorems, scientific results
    can't be proved. They can only be tested again and again, until only a
    fool would not believe them.

    I cannot prove that electrons exist, but I believe fervently in their
    existence. And if you don't believe in them, I have a high voltage
    cattle prod I'm willing to apply as an argument on their behalf.
    Electrons speak for themselves.

                                                   |[358]back to contents|
    ______________________________________________________________________

    [359]MARTIN NOWAK
    Biological Mathematician, Harvard University; Director, Center for
    Evolutionary Dynamics

    [nowak100.jpg]
    I believe the following aspects of evolution to be true without
    knowing how to turn them into (respectable) research topics.

      Important steps in evolution are robust. Multi-cellularity evolved
      at least ten times. There are several independent origins of
      eusociality. There were a number of lineages leading from primates
      to humans. If our ancestors had not evolved language,
      somebody else would have.
      Cooperation and language define humanity. Every special trait of
      humans is derivative of language.
      Mathematics is a language and therefore a product of evolution.

                                                   |[360]back to contents|
    ______________________________________________________________________

    [361]W. DANIEL HILLIS
    Physicist, Computer Scientist; Chairman, Applied Minds, Inc.; Author,
    The Pattern on the Stone

    [hillis100.jpg] I know that it sounds corny, but I believe that people
    are getting better. In other words, I believe in moral progress. It is
    not a steady progress, but there is a long-term trend in the right
    direction--a two steps forward, one step back kind of progress.
    I believe, but cannot prove, that our species is passing through a
    transitional stage, from being animals to being true humans. I do not
    pretend to understand what true humans will be like, and I expect that
    I would not even understand it if I met them. Yet, I believe that our
    own universal sense of right and wrong is pointing us in the right
    direction, and that it is the direction of our future.
    I believe that ten thousand years from now, people (or whatever we are
    by then) will be more empathetic and more altruistic than we are. They
    will trust each other more, and for good reason. They will take better
    care of each other. They will be more thoughtful about the broader
    consequences of their actions. They will take better care of their
    future than we do of ours.

                                                   |[362]back to contents|
    ______________________________________________________________________

    [363]ROBERT R. PROVINE
    Psychologist and Neuroscientist, University of Maryland; Author,
    Laughter

    [provine100.jpg] Human Behavior is Unconsciously Controlled.

    Until proven otherwise, why not assume that consciousness does not
    play a role in human behavior? Although it may seem radical on first
    hearing, this is actually the conservative position that makes the
    fewest assumptions. The null position is an antidote to philosopher's
    disease, the inappropriate attribution of rational, conscious control
    over processes that may be irrational and unconscious. The argument
    here is not that we lack consciousness, but that we over-estimate the
    conscious control of behavior. I believe this statement to be true.
    But proving it is a challenge because it's difficult to think about
    consciousness. We are misled by an inner voice that generates a
    reasonable but often fallacious narrative and explanation of our
    actions. That the beam of conscious awareness that illuminates our
    actions is on only part of the time further complicates the task.
    Since we are not conscious of our state of unconsciousness, we vastly
    overestimate the amount of time that we are aware of our own actions,
    whatever their cause.

    My thinking about unconscious control was shaped by my field studies
    of the primitive play vocalization of laughter. When I asked people to
    explain why they laughed in a particular situation, they would concoct
    some reasonable fiction about the cause of their behavior--"someone
    did something funny," "it was something she said," "I wanted to put
    her at ease." Observations of social context showed that such
    explanations were usually wrong. In clinical settings, such post hoc
    misattributions would be termed "confabulations," honest but flawed
    attempts to explain one's actions.

    Subjects also incorrectly presumed that laughing is a choice and under
    conscious control, a reason for their confident, if bogus,
    explanations of their behavior. But laughing is not a matter of speaking
    "ha-ha," as we would choose a word in speech. When challenged to laugh
    on command, most subjects could not do so. In certain, usually
    playful, social contexts, laughter simply happens. However, this lack
    of voluntary control does not preclude a lawful pattern of behavior.
    Laughter appears at those places where punctuation would appear in a
    transcription of a conversation--laughter seldom interrupts the phrase
    structure of speech. We may say, "I have to go now--ha-ha," but
    rarely, "I have to--ha-ha--go now." This punctuation effect is highly
    reliable and requires the coordination of laughing with the linguistic
    structure of speech, yet it is performed without conscious awareness
    of the speaker. Other airway maneuvers such as breathing and coughing
    punctuate speech and are performed without speaker awareness.

    The discovery of lawful but unconsciously controlled laughter produced
    by people who could not accurately explain their actions led me to
    consider the generality of this situation to other kinds of behavior.
    Do we go through life listening to an inner voice that provides
    similar confabulations about the causes of our action? Are essential
    details of the neurological process governing human behavior
    inaccessible to introspection? Can the question of animal
    consciousness be stood on its head and treated in a more parsimonious
    manner? Instead of considering whether other animals are conscious, or
    have a different, or lesser consciousness than our own, should we
    question if our behavior is under no more conscious control than
    theirs? The complex social order of bees, ants, and termites documents
    what can be achieved with little, if any, conscious control as we
    think of it. Is machine consciousness possible or even desirable? Is
    intelligent behavior a sign of conscious control? What kinds of tasks
    require consciousness? Answering these questions requires an often
    counterintuitive approach to the role, evolution and development of
    consciousness.

                                                   |[364]back to contents|
    ______________________________________________________________________

    [365]PAUL BLOOM
    Psychologist, Yale University; Author, Descartes' Baby

    [bloom100.jpg] John Macnamara once proposed that children come to
    learn about right and wrong, good and evil, in much the same way that
    they learn about geometry and mathematics. Moral development is not
    merely cultural learning, and it does not arise from innate principles
    that have evolved through natural selection. It is not like the
    development of language or sexual preference or taste in food.
    Instead, moral development involves the construction of an intricate
    formal system that makes contact with the external world in a
    significant way. This cannot be entirely right. We know that
    gut-feelings, such as reactions of empathy or disgust, have a major
    influence on how children and adults reason about morality. And no
    serious theory of moral development can ignore the role of natural
    selection in shaping our moral intuitions. But what I like about
    Macnamara's proposal is that it allows for moral realism. It allows
    for the existence of moral truths that people come to discover, just
    as we come to discover truths of mathematics. We can reject the
    nihilistic position (held by many researchers) that our moral
    intuitions are nothing more than accidents of biology or culture.
    And so I believe (though I cannot prove it) that the development of
    moral reasoning is the same sort of process as the development of
    mathematical reasoning.

                                                   |[366]back to contents|
    ______________________________________________________________________

    [367]PHILIP ZIMBARDO
    Psychologist, Emeritus Professor, Stanford University; Author, Shyness
    [zimbardo100.jpg] I believe that the prison guards at the Abu Ghraib
    Prison in Iraq, who worked the night shift in Tier 1A, where prisoners
    were physically and psychologically abused, had surrendered their free
    will and personal responsibility during these episodes of mayhem.
    But I could not prove it in a court of law. These eight army
    reservists were trapped in a unique situation in which the behavioral
    context came to dominate individual dispositions, values, and morality
    to such an extent that they were transformed into mindless actors
    alienated from their normal sense of personal accountability for their
    actions--at that time and place.

    The "group mind" that developed among these soldiers was created by a
    set of known social psychological conditions, some of which are nicely
    featured in Golding's Lord of the Flies. The same processes that I
    witnessed in my Stanford Prison Experiment were clearly operating in
    that remote place: Deindividuation, dehumanization, boredom,
    groupthink, role-playing, rule control, and more. Beyond the
    relatively benign conditions in my study, in that Iraqi prison, the
    guards experienced extreme fatigue and exhaustion from working 12-hour
    shifts, 7 days a week, for over a month at a time with no breaks.

    There was fear of being killed from mortar and grenade attacks and
    from prisoners rioting. There was revenge for buddies killed, and
    prejudice against these foreigners for their strange religion and
    cultural traditions. There was encouragement by staff "to soften up"
    the detainees for interrogation because Tier 1A was the
    Interrogation-Soft Torture center of that prison. Already in place
    when these young men and women arrived there for their tour of duty
    were abusive practices that had been "authorized" from the top of the
    chain of command: Use of nakedness as a humiliation tactic, sensory
    and sleep deprivation, stress positions, dog attacks, and more.

    In addition to the situational variables and processes operating in
    that behavioral setting was a series of systemic processes that
    created the barrel into which these good soldiers were forced to live
    and work. Most of the reports of independent investigation committees
    cite a failure of leadership, lack of leadership, or irresponsible
    leadership as factors that contributed to these abuses. Then there was
    a lack of mission-specific training of the guards, no oversight, no
    accountability to senior officers, poor resources, overcrowded
    facilities, and confusing commands from civilian interrogators at odds
    with the CIA, military intelligence, and other agencies and agents,
    all working in Tier 1A without clear communication channels and with
    much confusion.

    I was recently an expert witness for the defense of Sgt. Ivan "Chip"
    Frederick in his Baghdad trial. Before the trial, I spent a day with
    him, giving him an in-depth interview, checking all background
    information, and arranging for him to be psychologically assessed by
    the military. He is one of the alleged "bad apples" who these
    investigations have labeled as "morally corrupt." What did he bring
    into that situation and what did that situation bring into him?

    He seemed very much to be a normal young American. His psych
    assessments revealed no sign of any pathology, no sadistic tendencies,
    and all his psych assessment scores are in the normal range, as is his
    intelligence. He had been a prison guard at a small minimal security
    prison where he performed for many years without incident. So there is
    nothing in his background, temperament, or disposition that could have
    been a facilitating factor for the abuses he committed at the Abu
    Ghraib Prison.

    After a four-day long trial, part of which included my testimony
    elaborating on the points noted here, the Judge took barely one hour
    to find him guilty of all eight counts and to sentence Sgt. Frederick
    to 8 years in prison, starting in solitary confinement in Kuwait,
    dishonorable discharge, broken in rank from Sgt. to Pvt., loss of his
    20 years' retirement income and his salary. This military judge held
    Frederick personally responsible for the abuses, because he had acted
    out of free will to intentionally harm these detainees, since he was
    not forced into these acts, was not mentally incompetent, nor acting in
    self-defense. All of the situational and systemic determinants of his
    behavior and that of his buddies were disregarded and given a zero
    weighting coefficient in assessing causal factors.

    The real reason for the heavy sentence was the photographic
    documentation of the undeniable abuses along with the smiling abusers
    in their "trophy photos." It was the first time in history that such
    images were publicly available of what goes on in many prisons around
    the world, and especially in military prisons. They humiliated the
    military, and the entire chain of command all the way up the ladder to
    the White House. Following this exposure, investigations of all
    American military prisons in that area of the world uncovered similar
    abuses and worse, many murders of prisoners. Recent evidence has
    revealed that similar abuses started taking place again in Abu Ghraib
    prison barely one month after these disclosures became public--when
    the "Evil Eight Culprits" were in other prisons--as prisoners.

    Based on more than 30 years of research on "The Lucifer Effect"--the
    transformation of good people into perpetrators of evil--I believe
    that there are powerful situational and systemic forces operating on
    individuals in certain situations that can undercut a lifetime of
    morality and rationality. The Dionysian aspect of human nature can
    triumph over the Apollonian, not only during Mardi Gras, but in
    dynamic group settings like gang rapes, fraternity hazing, mob riots,
    and in that Abu Ghraib prison. I believe in that truth in general and
    especially in the case of Sgt. Frederick, but I was not able to prove
    it in a military court of law.

                                                   |[368]back to contents|
    ______________________________________________________________________

    [369]ALUN ANDERSON
    Editor-in-Chief, New Scientist

    [andersona100.jpg] Strangely, I believe that cockroaches are
    conscious. That is probably an unappealing thought to anyone who
    switches on a kitchen light in the middle of the night and finds a
    family of roaches running for cover. But it's really shorthand for
    saying that I believe that many quite simple animals are conscious,
    including more attractive beasts like bees and butterflies.

    I can't prove that they are, but I think in principle it will be
    provable one day, and there's a lot to be gained from thinking about
    the worlds of these relatively simple creatures, both
    intellectually--and even poetically. I don't mean that they are
    conscious in even remotely the same way as humans are; if that were
    true the world would be a boring place. Rather, the world is full of
    many overlapping alien consciousnesses.

    Why do I think there might be multiple forms of consciousness out there?
    Before becoming a journalist I spent 10 years and a couple of
    post-doctoral fellowships getting inside the sensory worlds of a
    variety of insects, including bees and cockroaches. I was inspired by
    A Picture Book of Invisible Worlds, a slim out-of-print volume by
    Jakob von Uexkull (1864-1944).
    The same book had also inspired Niko Tinbergen and Konrad Lorenz, the
    Nobel Prize winners who founded the field of ethology (animal
    behaviour). Von Uexkull studied the phenomenal world of animals, what
    he called their "umwelt", the worlds around animals as they themselves
    perceive them. Everything that an animal senses means something to
    it, for it has evolved to fit and create its world. The study of animals
    and their sensory worlds has now morphed into the field of sensory
    ecology, or on a wilder path, the newer science of biosemiotics.

    I spent time studying how honey bees could find their way around my
    laboratory room (they had learnt to fly in through a small opening in
    the window) and find a hidden source of sugar. Bees could learn all
    about the pattern of key features in the room and would show they were
    confused if objects were moved around when they were out of the room.
    They were also easily distracted by certain kinds of patterns,
    particularly ones with lots of points and lines that had very abstract
    similarities to the patterns on flowers, as well as by floral scents,
    and by sudden movements that signalled danger. In contrast, when they
    were busy gorging on the sugar almost nothing could distract them,
    making it possible for me to paint a little number on their backs
    so I could distinguish individual bees.

    To make sense of this ever changing behaviour, with its shifting focus
    of attention, I always found it simplest to figure out what was
    happening by imagining the sensory world of the bee, with its eye
    extraordinarily sensitive to flicker and colours we can't see, as a
    "visual screen" in the same way I can sit back and "see" my own visual
    screen of everything happening around me, with sights and sounds
    coming in and out of prominence. The objects in the bee's world have
    significances or "meaning" quite different from our own, which is why
    its attention is drawn to things we would barely perceive.

    That's what I mean by consciousness--the feeling of "seeing" the world
    and its associations. For the bee, it is the feeling of being a bee. I
    don't mean that a bee is self-conscious or spends time thinking about
    itself. But of course the problem of why the bee has its own "feeling"
    is the same incomprehensible "hard problem" of why the activity of our
    nervous system gives rise to our own "feelings".

    But at least the bee's world is very visual and capable of being
    imagined. Some creatures live in sensory worlds that are much harder
    to access. Spiders that hunt at night live in a world dominated by the
    detection of faint vibration and of the tiniest flows of air that
    allow them to see a fly passing by in pitch darkness. Sensory hairs that
    cover their body give them a sensitivity to touch far more finely
    grained than we can possibly feel through our own skin.

    To think this way about simple creatures is not to fall into the
    anthropomorphic fallacy. Bees and spiders live in their own world in
    which I don't see human-like motives. Rather it is a kind of
    panpsychism, which I am quite happy to sign up to, at least until we
    know a lot more about the origin of consciousness. That may take me
    out of the company of quite a few scientists who would prefer to
    believe that a bee with a brain of only a million neurones must surely
    be a collection of instinctive reactions with some simple switching
    mechanism between them, rather than have some central representation of
    what is going on that might be called consciousness. But it leaves me
    in the company of poets who wonder at the world of even lowly
    creatures.

      "In this falling rain,
      where are you off to
      snail?"

    wrote the haiku poet Issa.

    And as for the cockroaches, they are a little more human than the
    spiders. Like the owners of the New York apartments who detest them,
    they suffer from stress and can die from it, even without injury. They
    are also hierarchical and know their little territories well. When
    they are running for it, think twice before crushing out another
    world.

                                                   |[370]back to contents|
    ______________________________________________________________________

    [371]MARGARET WERTHEIM
    Science writer and Commentator; Author, Pythagoras' Trousers
    [wertheim100.jpg] We all believe in something and science itself is
    premised on a whole set of beliefs. Above all, science is founded on
    the belief that things are comprehensible and that by the ingenuity of
    our minds and the probing of ever more subtle instruments we will
    ultimately come to know It All. But is the All inherently knowable? I
    believe, though I cannot prove it, that there will always be things we
    do not know--large things, small things, interesting things and
    important things.
    If theoretical physics is any guide we might suppose that science is a
    march towards a finite goal. For the past few decades theoretical
    physicists have been searching for a so-called "Theory of Everything,"
    what Nobel laureate Steven Weinberg has also called a "Final Theory."
    This "ultimate" set of equations would tie together all the
    fundamental forces which physicists recognize today--the four
    essential powers: gravity, electromagnetism, and the two nuclear
    forces inside the cores of atoms. But such a theory--if we are lucky
    enough to
    extract it from the current mass of competing contenders--would not
    tell us anything about how proteins form or how DNA came into being.
    Less still would it illuminate the machinations of a living cell, or
    the workings of the human mind. Frankly, a "theory of everything"
    would not even help us to understand how snowflakes form.
    In an age when we have discovered the origin of the universe and
    observed the warping of space and time it is shocking to hear that
    scientists do not understand something as "paltry" as the formation of
    ice crystals. But that is indeed the case.

    Kenneth Libbrecht, chairman of the Caltech physics department, is a
    world expert on ice crystal formation, a hobby project he took on more
    than twenty years ago precisely because, as he puts it, "there are six
    billion people on this planet, and I thought that at least one of us
    should understand how snow crystals form." After two decades of
    meticulous experimentation inside specially constructed pressurized
    chambers Libbrecht believes he has made some headway in understanding
    how ice crystallizes at the edge of the quasi-liquid layer which
    surrounds all ice structures. He calls his theory "structure dependent
    attachment kinetics," but he is quick to point out that this is far
    from the ultimate answer. The transition from water to ice is a
    mysteriously complex process that has engaged minds as brilliant as
    Johannes Kepler and Michael Faraday. Libbrecht hopes he can add the
    small next step in our knowledge of this wondrous substance that is so
    central to life itself.
    Studying ice crystals is Libbrecht's hobby--in his "day job" he is one
    of the hundreds of physicists who are working on the LIGO detector
    which is designed to detect gravitational waves that are believed to
    emanate from black holes and other massive cosmological entities.
    Gravitational waves have been predicted by the general theory of
    relativity, and hence physicists believe they must exist. Here the
    matter of belief has literally brought into being an extremely expensive
    machine. Any successful theory of everything will have to account for
    gravity, the most mysterious of all the forces and the one physicists
    least understand. Like the other three forces, physicists believe
    gravity must ultimately manifest itself in both wave and particle
    forms. LIGO is designed to detect such waves, if indeed they do exist.
    Some years ago the science writer John Horgan wrote a marvelously
    provocative book in which he suggested that science was coming to an
    end, all the major theoretical edifices now supposedly being in place.
    Horgan was right in one sense, for high-energy physics may be on the
    verge of achieving its final unification. But in so many other areas,
    science is just beginning. Only now are we acquiring the scientific
    tools and techniques to begin to investigate how our atmosphere works,
    how ecological systems function, how genes create proteins, how cells
    evolve, and how brains work. The very success of "fundamental science"
    has opened doors undreamed of by earlier generations and in many ways
    it seems there is more than ever that we do not know. At a time when
    journals tout theories about how to create entire universes it is easy
    to imagine that science has grasped the whole of reality. In truth our
    ignorance is vast--and personally I believe it will always be so.
    Rather than pretend we will soon know it all, I suggest we might adopt
    instead the attitude of the great fifteenth-century champion of
    science, Cardinal Nicholas of Cusa. Cusa titled his major work On
    Learned Ignorance. A complex and poetic fusion of mathematics,
    scientific speculation and Catholic theology, Cusa puts forward in
    this book the view that we can never--even in principle--know
    everything. Only God can do that. We mortals, confined within the
    world itself, can never see it whole, from the outside as it were. But
    while we cannot know It All, Cusa insists we can know a great deal and
    that science and mathematics will take our knowledge forward. Our
    ignorance then can be ever more learned. Not omniscience then, but an
    ever more subtle and insightful unknowing is the goal that Cusa
    advocated. In the humble snowflakes Ken Libbrecht studies we have the
    perfect metaphor for such a view--though they melt on your tongue,
    each tiny crystal of ice encapsulates a universe whose basic rules we
    have barely begun to unravel.

                                                   |[372]back to contents|
    ______________________________________________________________________

    [373]KENNETH FORD
    Physicist; Retired director, American Institute of Physics; Author,
    The Quantum World
    [ford100.jpg] I believe that microbial life exists elsewhere in our
    galaxy.

    I am not even saying "elsewhere in the universe." If the proposition I
    believe to be true is to be proved true within a generation or two, I
    had better limit it to our own galaxy. I will bet on its truth there.

    I believe in the existence of life elsewhere because chemistry seems
    to be so life-striving and because life, once created, propagates
    itself in every possible direction. Earth's history suggests that
    chemicals get busy and create life given any old mix of substances
    that includes a bit of water, and given practically any old source of
    energy; further, that life, once created, spreads into every nook and
    cranny over a wide range of temperature, acidity, pressure, light
    level, and so on.

    Believing in the existence of intelligent life elsewhere in the galaxy
    is another matter. Good luck to the SETI people and applause for their
    efforts, but consider that microbes have inhabited Earth for at least
    75 percent of its history, whereas intelligent life has been around
    for but the blink of an eye, perhaps 0.02 percent of Earth's history
    (and for nearly all of that time without the ability to communicate
    into space). Perhaps intelligent life will have staying power. We
    don't know. But we do know that microbial life has staying power.
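
    To put those percentages in rough perspective, here is my own
    back-of-the-envelope arithmetic (not Ford's), assuming an age for the
    Earth of roughly 4.5 billion years:

        # Rough scale check; the Earth-age figure is an assumption used
        # only for illustration.
        earth_age_yr = 4.5e9
        microbial_span = 0.75 * earth_age_yr      # ~3.4 billion years of microbial life
        intelligent_span = 0.0002 * earth_age_yr  # ~900,000 years of intelligent life
        print(f"{microbial_span:.2e} vs {intelligent_span:.2e} years")

    On that accounting, microbes have been around for billions of years,
    while anything one might call intelligent has existed for well under a
    million--and has been able to signal into space for only about a
    century of that.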

    Now to a supposition: that Mars will be found to have harbored life
    and harbors life no more. If this proves to be the case, it will be an
    extraordinarily sobering discovery for humankind, even more so than
    the view of our fragile blue ball from the Moon, even more so than our
    removal from the center of the universe by Copernicus, Galileo, and
    Newton--perhaps even more so than the discovery of life elsewhere in
    the galaxy.

                                                   |[374]back to contents|
    ______________________________________________________________________

    [375]DONALD HOFFMAN
    Cognitive Scientist, UC, Irvine; Author, Visual Intelligence
    [hoffman100.jpg] I believe that consciousness and its contents are all
    that exists. Spacetime, matter and fields never were the fundamental
    denizens of the universe but have always been, from their beginning,
    among the humbler contents of consciousness, dependent on it for their
    very being.

    The world of our daily experience--the world of tables, chairs, stars
    and people, with their attendant shapes, smells, feels and sounds--is
    a species-specific user interface to a realm far more complex, a realm
    whose essential character is conscious. It is unlikely that the
    contents of our interface in any way resemble that realm. Indeed the
    usefulness of an interface requires, in general, that they do not. For
    the point of an interface, such as the windows interface on a
    computer, is simplification and ease of use. We click icons because
    this is quicker and less prone to error than editing megabytes of
    software or toggling voltages in circuits. Evolutionary pressures
    dictate that our species-specific interface, this world of our daily
    experience, should itself be a radical simplification, selected not
    for the exhaustive depiction of truth but for the mutable pragmatics
    of survival.

    If this is right, if consciousness is fundamental, then we should not
    be surprised that, despite centuries of effort by the most brilliant
    of minds, there is as yet no physicalist theory of consciousness, no
    theory that explains how mindless matter or energy or fields could be,
    or cause, conscious experience. There are, of course, many proposals
    for where to find such a theory--perhaps in information, complexity,
    neurobiology, neural darwinism, discriminative mechanisms, quantum
    effects, or functional organization. But no proposal remotely
    approaches the minimal standards for a scientific theory: quantitative
    precision and novel prediction. If matter is but one of the humbler
    products of consciousness, then we should expect that consciousness
    itself cannot be theoretically derived from matter. The mind-body
    problem will be to physicalist ontology what black-body radiation was
    to classical mechanics: first a goad to its heroic defense, later the
    provenance of its final supersession.

    The heroic defense will, I suspect, not soon be abandoned. For the
    defenders doubt that a replacement grounded in consciousness could
    attain the mathematical precision or impressive scope of physicalist
    science. It remains to be seen, of course, to what extent and how
    effectively mathematics can model consciousness. But there are
    fascinating hints: According to some of its interpretations, the
    mathematics of quantum theory is itself, already, a major advance in
    this project. And perhaps much of the mathematical progress in the
    perceptual and cognitive sciences can also be so interpreted. We shall
    see.

    The mind-body problem may not fall within the scope of physicalist
    science, since this problem has, as yet, no bona fide physicalist
    theory. Its defenders can surely argue that this penury shows only
    that we have not been clever enough or that, until the right mutation
    chances by, we cannot be clever enough, to devise a physicalist
    theory. They may be right. But if we assume that consciousness is
    fundamental then the mind-body problem transforms from an attempt to
    bootstrap consciousness from matter into an attempt to bootstrap
    matter from consciousness. The latter bootstrap is, in principle,
    elementary: Matter, spacetime and physical objects are among the
    contents of consciousness.

    The rules by which, for instance, human vision constructs colors,
    shapes, depths, motions, textures and objects, rules now emerging from
    psychophysical and computational studies in the cognitive sciences,
    can be read as a description, partial but mathematically precise, of
    this bootstrap. What we lose in this process are physical objects that
    exist independent of any observer. There is no sun or moon unless a
    conscious mind perceives them, for both are constructs of
    consciousness, icons in a species-specific user interface. To some
    this seems a patent absurdity, a reductio of the position, readily
    contradicted by experience and our best science. But our best science,
    our theory of the quantum, gives no such assurance. And experience
    once led us to believe the earth flat and the stars near. Perhaps, in
    due time, mind-independent objects will go the way of flat earth.

    This view obviates no method or result of science, but integrates and
    reinterprets them in its framework. Consider, for instance, the quest
    for neural correlates of consciousness (NCC). This holy grail of
    physicalism can, and should, proceed unabated if consciousness is
    fundamental, for it constitutes a central investigation of our user
    interface. To the physicalist, an NCC is, potentially, a causal source
    of consciousness. If, however, consciousness is fundamental, then an
    NCC is a feature of our interface correlated with, but never causally
    responsible for, alterations of consciousness. Damage the brain,
    destroy the NCC, and consciousness is, no doubt, impaired. Yet neither
    the brain nor the NCC causes consciousness. Instead consciousness
    constructs the brain and the NCC. This is no mystery. Drag a file's
    icon to the trash and the file is, no doubt, destroyed. Yet neither
    the icon nor the trash, each a mere pattern of pixels on a screen,
    causes its destruction. The icon is a simplification, a graphical
    correlate of the file's contents (GCC), intended to hide, not to
    instantiate, the complex web of causal relations.

                                                   |[376]back to contents|
    ______________________________________________________________________

    [377]DENIS DUTTON
    Philosopher of Art, University of Canterbury, New Zealand; Editor,
    Arts & Letters Daily

    [dutton100.jpg] In a 1757 essay, philosopher David Hume argued that
    because "the general principles of taste are uniform in human nature"
    the value of some works of art might be essentially eternal. He
    observed that the "same Homer who pleased at Athens and Rome two
    thousand years ago, is still admired at Paris and London." The works
    that manage to endure over millennia, Hume thought, do so precisely
    because they appeal to deep, unchanging features of human nature.

    Some unique works of art, for example, Beethoven's Pastoral Symphony,
    possess this rare but demonstrable capacity to excite the human mind
    across cultural boundaries and through historic time. I cannot prove
    it, but I think a small body of such works--by Homer, Bach,
    Shakespeare, Murasaki Shikibu, Vermeer, Michelangelo, Wagner, Jane
    Austen, Sophocles, Hokusai--will be sought after and enjoyed for
    centuries or millennia into the future. As much as fashions and
    philosophies are bound to change, these works will remain objects of
    permanent value to human beings.

    These epochal survivors of art are more than just popular. The
    majority of works of popular art today are not inevitably shallow or
    worthless, but they tend to be easily replaceable. In the modern mass
    art system, artistic forms endure, while individual works drop away.
    Spy thrillers, romance novels, pop songs, and soap operas are daily
    replaced by more thrillers, romance novels, pop songs, and soap
    operas. In fact, the ephemeral nature of mass art seems more
    pronounced than ever: most popular works are incapable of surviving
    even a year, let alone a couple of generations. It's different with
    art's classic survivors: even if they began, as Sophocles' and
    Shakespeare's did, as works of popular art, they set themselves apart
    in their durable appeal: nothing kills them. Audiences keep coming
    back to experience these original works themselves.

    Against the idea of permanent aesthetic values is cultural relativism,
    which is taught as the default orthodoxy in many university
    departments. Aesthetic values have been widely construed by academics
    as merely contingent reflections of local social and economic
    conditions. Beauty, if not in the eye of the beholder, has been
    misconstrued as merely in the eyes of society, a conditioning that
    determines values of cultural seeing. Such veins of explanation often
    include no small amount of cynicism: why do people go to the opera?
    Oh, to show off their furs. Why are they thrilled by famous paintings?
    Because they're worth millions. Beneath such explanations is a denial
    of intrinsic aesthetic merit.

    Such aesthetic relativism is decisively refuted, as Hume understood,
    by the cross-cultural appeal of a small class of art objects over
    centuries: Mozart packs Japanese concert halls, as Hiroshige does
    Paris galleries, while new productions of Shakespeare in every major
    language of the world are endless. And finally, it is beginning to
    look as though empirical psychology is equipped to address the
    universality of art. For example, evolutionary psychology is being
    used by literary scholars to explain the persistent themes and plot
    devices in fiction. The rendering of faces, bodies, and landscape
    preferences in art is amenable to psychological investigation. The
    structure of musical perception is now open to experimental analysis
    as never before. Poetic experience can be elucidated by the insights
    of contemporary linguistics. None of this research promises a recipe
    for creating great art, but it can throw light on what we already know
    about aesthetic pleasure.

    What's going on most days in the Metropolitan Museum and most nights
    at Lincoln Center involves aesthetic experiences that will be
    continuously revived and relived by our descendants into an indefinite
    future. In a way, this makes the creations of the greatest artists as
    much permanent achievements as the discoveries of the greatest
    scientists.
    That much I think I know. The question we should now ask is, What
    makes this possible? What is it about the highest works of art that
    gives them eternal appeal?

                                                   |[378]back to contents|
    ______________________________________________________________________

    [379]DAVID MYERS
    Psychologist, Hope College; Author, Intuition

    [myers100.jpg] As a Christian monotheist, I start with two unproven
    axioms:

      1. There is a God.
      2. It's not me (and it's also not you).

    Together, these axioms imply my surest conviction: that some of my
    beliefs (and yours) contain error. We are, from dust to dust, finite
    and fallible. We have dignity but not deity.
    And that is why I further believe that we should

      a) hold all our unproven beliefs with a certain tentativeness
      (except for this one!),

      b) assess others' ideas with open-minded skepticism, and

      c) freely pursue truth aided by observation and experiment.

    This mix of faith-based humility and skepticism helped fuel the
    beginnings of modern science, and it has informed my own research and
    science writing. The whole truth cannot be found merely by searching
    our own minds, for there is not enough there. So we also put our ideas
    to the test. If they survive, so much the better for them; if not, so
    much the worse.
    Within psychology, this "ever-reforming" process has many times
    changed my mind, leading me now to believe, for example, that newborns
are not so dumb, that electroconvulsive therapy often alleviates
    intractable depression, that America's economic growth has not
    improved our morale, that the automatic unconscious mind dwarfs the
    conscious mind, that traumatic experiences rarely get repressed, that
    most folks don't suffer low self-esteem, and that sexual orientation
    is not a choice.

                                                   |[380]back to contents|
    ______________________________________________________________________

    [381]ESTHER DYSON
    Editor of Release 1.0; Trustee, Long Now Foundation; Author, Release
    2.0

    [dysone100.jpg] We're living longer, and thinking shorter.

    [Disclaimer: Since I'm not a scientist, I'm not even going to attempt
    to take on something scientific. Rather, I want to talk about
    something that can't easily be measured, let alone proved.

    And second, though what I'm saying may sound gloomy, I love the times
    we live in. There has never been a time more interesting, more full of
    things to explain, interesting people to meet, worthy causes to
    support, challenging problems to solve.]

    It's all about time.

    I think modern life has fundamentally and paradoxically changed our
    sense of time. Even as we live longer, we seem to think shorter. Is it
    because we cram more into each hour? Or because the next person over
    seems to cram more into each hour?

    For a variety of reasons, everything is happening much faster and more
    things are happening. Change is a constant.

    It used to be that machines automated work, giving us more time to do
    other things. But now machines automate the production of
    attention-consuming information, which takes our time. For example, if
    one person sends the same e-mail message to 10 people, then 10 people
    have to respond.

    The physical friction of everyday life--the time it took Isaac Newton
    to travel by coach from London to Cambridge, the dead spots of walking
    to work (no iPod), the darkness that kept us from reading--has
    disappeared, making every minute not used productively into an
    opportunity cost.

    And finally, we can measure more, over smaller chunks of time. From
    airline miles to calories (and carbs and fat grams), from friends on
    Friendster to steps on a pedometer, from realtime stock prices to
    millions of burgers consumed, we count things by the minute and the
    second.

    Unfortunately, this carries over into how we think and plan:
    Businesses focus on short-term results; politicians focus on
    elections; school systems focus on test results; most of us focus on
    the weather rather than the climate. Everyone knows about the big
    problems, but their behavior focuses on the here and now.

    I first noticed this phenomenon in a big way in the US right after
    9/11, when it became impossible to schedule an appointment or get
    anyone to make a commitment. To me, it felt like Russia (where I had
    been spending time since 1989), where people avoided long-term plans
    because there was little discernible relationship between effort and
    result. Suddenly, even in the US, people were behaving like the
    Russians of those days, reluctant to plan for anything more than a few
    days out.

    Of course, that immediate crisis has passed, but there's still the
    same sense of unpredictability dogging our thinking in the US (in
    particular). Best to concentrate on the current quarter, because who
    knows what job I'll have next year. Best to pass that test, because
    what I actually learn won't be worth much ten years from now anyway.

    How can we reverse this?

    It's a social problem, but I think it may also herald a mental
    one--which I describe as mental diabetes.

    Whatever's happening to adults, most of us grew up reading books (at
    least occasionally) and playing with "uninteractive" toys that
    required us to make up our own stories, dialogue and behavior for
    them. Today's children are living in an information-rich,
    time-compressed environment that often seems to replace a child's
    imagination rather than stimulate it. I posit that being fed so much
    processed information--video, audio, images, flashing screens, talking
    toys, simulated action games--is akin to being fed too much processed,
    sugar-rich food. It may seriously mess up children's information
    metabolism and their ability to process information for themselves. In
    other words, will they be able to discern cause and effect, to put
    together a coherent story line, to think scientifically?

    I don't know the answers, but these questions are worth thinking
    about, for the long term.

                                                   |[382]back to contents|
    ______________________________________________________________________

    [383]DAVID BUSS
    Psychologist, University of Texas, Austin; Author, The Evolution of
    Desire

    [buss100.jpg] True love.

    I've spent two decades of my professional life studying human mating.
    In that time, I've documented phenomena ranging from what men and
    women desire in a mate to the most diabolical forms of sexual
    treachery. I've discovered the astonishingly creative ways in which
    men and women deceive and manipulate each other. I've studied mate
    poachers, obsessed stalkers, sexual predators, and spouse murderers.
    But throughout this exploration of the dark dimensions of human
    mating, I've remained unwavering in my belief in true love.
    While love is common, true love is rare, and I believe that few people
    are fortunate enough to experience it. The roads of regular love are
    well traveled and their markers are well understood by many--the
    mesmerizing attraction, the ideational obsession, the sexual
    afterglow, profound self-sacrifice, and the desire to combine DNA. But
    true love takes its own course through uncharted territory. It knows
    no fences, has no barriers or boundaries. It's difficult to define,
    eludes modern measurement, and seems scientifically wooly. But I know
    true love exists. I just can't prove it.

                                                   |[384]back to contents|
    ______________________________________________________________________

    [385]MARIA SPIROPULU
    Physicist, currently at CERN

    [spiropulu100.jpg] I believe nothing to be true (clearly real) if it
    cannot be proved.
    I'll take the question and make a pseudo-invariant transformation that
    makes it more apt to my brain. When Bohr was asked what is the
    complementary variable of "truth" (Wirklichkeit) he replied with no
    hesitation "clarity" (Klarheit). Contrary to Bohr, and since neither
truth nor clarity is a quantum mechanical variable, real truth and
    comprehensive clarity should be simultaneously achievable given
    rigorous experimental evidence. [In particular since "Wirklichkeit"
    means reality, and "Klarheit" is clarity in the sense of good
    understanding.]

    In fact I will use clarity (as in "clear reality"), in the place of
    truth.

    I will also invent equivalents for proof and for belief. Proof will be
    interchangeable with "experimental scientific evidence". Belief is
    more tricky given that it has to do with complex carbonic life. It can
    be interchangeable with "theoretical assessment" or "assessment by
    common sense" (depending on the scale and the available technology).
    In this process (no doubt in a path full of traps and pitfalls) I have
    cannibalized the original question to the following:

      What do you (commonsensical/theoretically) assess to be clearly
      real even though you have no experimental scientific evidence for
      it?

    Now this is hard: there are many theoretical assessments for the
    explanation of the natural phenomena at the extreme energy scales
(from the subnuclear to the supercosmic) that possess a degree of
    clarity. But all of them are inspired by the vast collection of
    conciliatory data that scale by scale speak of Nature's works. This is
    so even for string theory.

    So the answer is still...nothing.

    Following Bohr's complementarity I would spot that belief and proof
    are in some way complementary: if you believe you don't need proof,
    and (arguably) if you have proof you don't need to believe. (I would
    assign the hard-core string theorists who do not really care about
    experimental scientific evidence to the first category.)

    But Edge wants us to identify the equivalent(s) of the general theory
    of relativity in today's scientific thinking(s). Or a prediction of
    what are the big things in science that come at us unexpectedly. In my
    field, even frameworks that explain the world using extra dimensions
    of space (in extreme versions) are not unexpected. As a matter of fact
    we are preparing to discover or exclude them using the data. My hunch
    (and wish) is that in the laboratory we will be able to segment
    spacetime so finely that gravity will be studied and understood in a
    controlled environment, and that gravitational particle physics will
    be a new field.

                                                   |[386]back to contents|
    ______________________________________________________________________

    [387]J. CRAIG VENTER
    Genomics Researcher; Founder & President, J. Craig Venter Science
    Foundation

    [venter100.jpg] Life is ubiquitous throughout the universe. Life on
    our planet earth most likely is the result of a panspermic event (a
    notion popularized by the late Francis Crick).

    DNA, RNA and carbon-based life will be found wherever we find water
    and look with the right tools. Whether we can prove that it exists
    depends on our ability to improve remote sensing and to visit faraway
    systems. This will also depend on whether we survive as a species for
    a sufficient period of time. As we have seen recently in the shotgun
    sequencing of the Sargasso Sea, when we look for life here on Earth
    with new tools of DNA sequencing we find life in abundance in the
    microbial world. In sequencing the genetic code of organisms that
    survive in the extremes of zero degrees C to well over boiling water
    temperatures we begin to understand the breadth of life, including
    life that can thrive in extremes of caustic conditions of strong acids
    to basic pH's that would rapidly dissolve human skin. Possible
    indicators of panspermia are the organisms such as Deinococcus
    radiodurans, which can survive millions of RADs of ionizing radiation
    and complete desiccation for years or perhaps millennia. These
    microbes can repair any DNA damage within hours of being reintroduced
    into an aqueous environment.

Our human-centric view of life is clearly unwarranted. From the
    millions of genes that we have just discovered in environmental
    organisms over the past months we learn that a finite number of themes
    are used over and over again and could have easily evolved from a few
    microbes arriving on a meteor or on intergalactic dust. Panspermia is
how life spreads throughout the universe, and we are contributing to
    it from earth by launching billions of microbes into space.

                                                   |[388]back to contents|
    ______________________________________________________________________

    [389]STEPHEN PETRANEK
    Editor-in-Chief, Discover Magazine

    [petranek100.jpg] I believe that life is common throughout the
    universe and that we will find another Earth-like planet within a
    decade.

    The mathematics alone ought to be proof to most people (billions of
    galaxies, with billions of stars in each galaxy, and planets around
    most of those stars). The numbers suggest that for life not to
    exist elsewhere in the universe is the unlikely scenario. But there is
    more to this idea than a good chance. We've now found more than 130
    planets just looking at nearby stars in our tiny little corner of the
    Milky Way. The results suggest there are uncountable numbers of
    planets in our galaxy alone. Some of them are likely to be earthlike,
    or at least earth-sized, although the vast majority that we've found
    so far are huge gas giants like Jupiter and Saturn which are unlikely
    to harbor life. Furthermore, there were four news events this year
that made the discovery of life elsewhere far more likely.
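
    Petranek's "mathematics alone" argument is a back-of-envelope
    estimate. A minimal sketch of that arithmetic, in Python, with every
    number below a rough illustrative assumption rather than a measured
    value:

      # Illustrative back-of-envelope only; the counts are assumptions in
      # the spirit of "billions of galaxies with billions of stars", not
      # measurements.
      galaxies = 1e11           # assumed: ~100 billion galaxies
      stars_per_galaxy = 1e11   # assumed: ~100 billion stars per galaxy
      planet_fraction = 0.5     # assumed: planets around most stars

      planetary_systems = galaxies * stars_per_galaxy * planet_fraction
      print(f"{planetary_systems:.1e} candidate planetary systems")  # -> 5.0e+21

    Even with far more pessimistic fractions for habitability, the count
    stays astronomically large, which is the point of the argument.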

    First, the NASA Mars Rover called Opportunity found incontrovertible
evidence that a briny (salty) sea once covered the area where it
    landed, called Meridiani Planum. The only question about life on Mars
    now is whether that sea--which was there twice in Martian
    history--existed long enough for life to form. The Phoenix mission in
    2008 may answer that question.

    Second, a team of astrophysicists reported in July that radio
    emissions from Sagittarius B2, a nebula near the center of the Milky
    Way, indicate the presence of aldehyde molecules, the prebiotic stuff
    of life. Aldehydes help form amino acids, the fundamental components
    of proteins. The same scientists previously reported clouds of other
    organic molecules in space, including glycolaldehyde, a simple sugar.
    Outer space is thus full of complex molecules--not just
    atoms--necessary for life. Comets in other solar systems could easily
    deposit such molecules on planets, as they may have done in our solar
    system with earth.
    Third, astronomers in 2004 found much smaller planets around other
    stars for the first time. Barbara McArthur at the University of Texas
    at Austin found a planet 18 times the mass of Earth around 55 Cancri,
    a star with three other known planets. A team in Portugal announced
finding a 14-Earth-mass planet. These smaller planets are likely to be rock,
    not gas. McArthur says, "We're on our way to finding an extrasolar
    earth."

    Fourth, astronomers are not only getting good at finding new planets
    around other stars, they're getting the resolution of the newest
    telescopes so good that they can see the dim light from some newly
    found planets. Meanwhile, even better telescopes are being built, like
the Large Binocular Telescope on Mt. Graham in Arizona, which will see more
    planets. With light we can analyze the spectrum a new planet reflects
    and determine what's on that planet--like water. Water, we also
discovered recently, is abundant in space in large clouds between and
    near stars.

    So everything life needs is out there. For it not to come together
    somewhere else as it did on earth is remarkably unlikely. In fact,
    although there are Goldilocks zones in galaxies where life as we know
    it is most likely to survive (there's too much radiation towards the
    center of the Milky Way, for example), there are almost countless
    galaxies out there where conditions could be ripe for life to evolve.
    This is a golden age of astrophysics and we're going to find life
    elsewhere.

                                                   |[390]back to contents|
    ______________________________________________________________________

    [391]SIMON BARON-COHEN
    Psychologist, Autism Research Centre, Cambridge University; Author,
    The Essential Difference
    [baroncohen100.jpg] I am not interested in ideas that cannot in
principle be proven or disproven. I am as capable as the next guy of
    believing in an idea that is not yet proven so long as it could in
    principle be proven or disproven.
    In my chosen field of autism, I believe that the cause will turn out
    to be assortative mating of two hyper-systemizers. I believe this
because we already have 3 pieces of the jigsaw: (1) that fathers of
    children with autism are more likely to work in the field of
    engineering (compared to fathers of children without autism); (2) that
    grandfathers of children with autism--on both sides of the
    family--were also more likely to work in the field of engineering
    (compared to grandfathers of children without autism); and (3) that
    both mothers and fathers of children with autism are super-fast at the
    embedded figures test, a task requiring analysis of patterns and
    rules. (Note that engineering is a chosen example because it involves
    strong systemizing. But other related scientific and technical fields
    [such as math or physics] would have been equally good examples to
    study).
    We have had these three pieces of the jigsaw since 1997, published in
    the scientific literature. They do not yet prove the assortative
    mating theory. They simply point to it being highly likely. Direct
    tests of the theory are still needed. I will be the first to give up
    this idea if it is proven wrong, since I'm not in the business of
    holding onto wrong ideas. But I won't give up the idea simply because
    it will be unpopular to certain groups (such as those who want to
    believe that the cause of autism is purely environmental). I will hold
    onto the idea until it has been properly tested. Popperian science is
    about being able to let go of an idea when the evidence goes against
    it, but it is also about being able to hold onto an idea until the
    evidence has been collected, if you have enough reasons to believe it
    might be true.
    The causes of autism are likely to be complex, including at the very
    least multiple genes interacting with environmental factors, but the
    assortative mating theory may describe some contributing factors.

                                                   |[392]back to contents|
    ______________________________________________________________________

    [393]TOM STANDAGE
    Technology Editor, The Economist
    [standage100.jpg] I believe that the radiation emitted by mobile
    phones is harmless.

    My argument is not based so much on the scientific evidence--because
    there isn't very much of it, and what little there is has either found
    no effect or is statistically dubious. Instead, it is based on a
    historical analogy with previous scares about overhead power lines and
    cathode-ray computer monitors (VDUs). Both were also thought to be
    dangerous, yet years of research--decades in the case of power
    lines--failed to find conclusive evidence of harm.

    Mobile phones seem to me to be the latest example of what has become a
    familiar pattern: anecdotal evidence suggests that a technology might
    be harmful, and however many studies fail to find evidence of harm,
    there are always calls for more research.
    The underlying problem, of course, is the impossibility of proving a
    negative. During the fuss over genetically modified crops in Europe,
    there were repeated calls for proof that GM technology was safe.
    Similarly, in the aftermath of the BSE scare in Britain, scientists
    were repeatedly asked for proof that beef was safe to eat. But you
    cannot prove that something has no effect: absence of evidence is not
    evidence of absence. All you can do is look for evidence of harm. If
    you don't find it, you can look again. If you still fail to find it,
    the question is still open: "lack of evidence of harm" means both
    "safe as far as we can tell" and "we still don't know if it's safe or
    not". Scientists are often unfairly accused of logic-chopping when
    they point this out.
    Looking back even further, I expect mobile phones will turn out to be
    merely the latest in a long line of technologies that raised health
    concerns that subsequently turned out to be unwarranted. In the 19th
    century, long before the power-line and VDU scares, telegraph wires
    were accused of affecting the weather, and railway travel was believed
    to cause nervous disorders.
    The irony is that since my belief that mobile phones are safe is based
    on a historical analysis, I am on no firmer ground scientifically than
    those who believe mobile phones are harmful. Still, I believe they are
    safe, though I can't prove it.

                                                   |[394]back to contents|
    ______________________________________________________________________

    [395]LEON LEDERMAN
    Physicist and Nobel Laureate; Director Emeritus, Fermilab; Coauthor,
    The God Particle
    [lederman100.jpg] My friend, a theoretical physicist, believed so
    strongly in String Theory that he insisted, "It must be true!" He was
    called to testify
    in a lawsuit, which contested the claims of String Theory against
    Quantum Loop Gravity. The lawyer was skeptical. "What makes you such
    an authority?" he asked. "Oh, I am without question the world's most
    outstanding theoretical physicist", was the startling reply. It was
    enough to convince the lawyer to change the subject. However, when the
    witness came off the stand, he was surrounded by protesting
    colleagues.
    "How could you make such an outrageous claim?" they asked. The
theoretical physicist defended himself: "Fellows, you just don't understand; I
    was under oath."
    To believe without knowing it cannot be proved (yet) is the essence of
    physics. Guys like Einstein, Dirac, Poincaré, etc. extolled the beauty
    of concepts, in a bizarre sense, placing truth at a lower level of
    importance. There are enough examples that I resonated with the
    arrogance of my theoretical masters who were in effect saying that
    God, a.k.a. the Master, Der Alte, may have, in her fashioning of the
universe, made some errors in favoring a convenient truth over a
    breathtakingly wondrous mathematics. This inelegant lack of confidence
    has heretofore always proved hasty. Thus, when the long respected law
    of mirror symmetry was violated by weakly interacting but exotic
    particles, our pain at the loss of simplicity and harmony was greatly
    alleviated by the discovery of the failure of particle-antiparticle
    symmetry. The connection was exciting because the simultaneous
    reflection in a mirror and change of particles to antiparticles seemed
    to restore a new and more powerful symmetry--"CP" symmetry now gave us
    a connection of space (mirror reflection) and electric charge. How
    silly of us to have lost confidence in the essential beauty of nature!
    The renewed confidence remained even when it turned out that "CP" was
    also imperfectly respected. "Surely," we now believe, "there is in
    store some spectacular, new, unforeseen splendor in all of us." She
    will not let us down. This we believe, even though we can't prove it.

                                                   |[396]back to contents|
    ______________________________________________________________________

    [397]MICHAEL SHERMER
    Publisher, Skeptic magazine; Columnist, Scientific American; Author
    Science Friction
    [shermer100.jpg] I believe, but cannot prove...that reality exists
    over and above human and social constructions of that reality. Science
    as a method, and naturalism as a philosophy, together form the best
    tool we have for understanding that reality. Because science is
    cumulative--that is, it builds on itself in a progressive fashion--we
    can strive to achieve an ever-greater understanding of reality. Our
    knowledge of nature remains provisional because we can never know if
    we have final Truth. Because science is a human activity and nature is
    complex and dynamic, fuzzy logic and fractional probabilities best
    describe both nature and the estimations of our approximation toward
    understanding that nature.

    There is no such thing as the paranormal and the supernatural; there
    is only the normal and the natural and mysteries we have yet to
    explain.

    What separates science from all other human activities is its belief
    in the provisional nature of all conclusions. In science, knowledge is
    fluid and certainty fleeting. That is the heart of its limitation. It
    is also its greatest strength. There are, from this ultimate
    unprovable assertion, three additional insoluble derivatives.

      1. There is no God, intelligent designer, or anything resembling
      the divinity as proffered by the world's religions (although an
      extra-terrestrial being of significantly greater intelligence and
      power than us would be indistinguishable from God).

      After thousands of years of the world's greatest minds attempting
      to prove or disprove the divinity's existence or nonexistence, with
      little agreement or consensus amongst scholars as to the divinity's
      ultimate state of being, a reasonable conclusion is that the God
      question can never be solved and that one's belief, disbelief, or
      skepticism ultimately rests on a non-rational basis.
      2. The universe is ultimately determined, but we have free will.

      As with the God question, scholars of considerable intellectual
      power for many millennia have failed to resolve the paradox of
      feeling free in a determined universe. One provisional solution is
      to think of the universe as so complex that the number of causes
      and the complexity of their interactions make the predetermination
      of human action pragmatically impossible. We can even put a figure
      on the causal net of the universe to see just how absurd it is to
      think we can get our minds around it fully.

      It has been computed that in order for a computer in the far future
      of the universe to resurrect in a virtual reality every person who
      ever lived or could have lived, with all causal interactions
      between themselves and their environment, it would need 10 to the
      power of 10 to the power of 123 bits (a 1 followed by 10^123 zeros)
      of memory. Suffice it to say that no computer within the
      conceivable future will achieve this level of power; likewise no
      human brain even comes close.

      The enormity of this complexity leads us to feel as if we are
      acting freely as uncaused causers, even though we are actually
      causally determined. Since no set of causes we select as the
      determiners of human action can be complete, the feeling of freedom
      arises out of this ignorance of causes. To that extent we may act
      as if we are free. There is much to gain, little to lose, and
      personal responsibility follows.
      3. Morality is the natural outcome of evolutionary and historical
      forces, not divine command.
      The moral feelings of doing the right thing (such as virtuousness)
      or doing the wrong thing (such as guilt) were generated by nature
      as part of human evolution.

      Although cultures differ on what they define as right and wrong,
      the moral feelings of doing the right or wrong thing are universal
      to all humans. Human universals are pervasive and powerful, and
      include at their core the fact that we are, by nature, moral and
      immoral, good and evil, altruistic and selfish, cooperative and
      competitive, peaceful and bellicose, virtuous and non-virtuous.
      Individuals and groups vary on the expression of such universal
      traits, but everyone has them. Most people, most of the time, in
      most circumstances, are good and do the right thing for themselves
      and for others. But some people, some of the time, in some
      circumstances, are bad and do the wrong thing for themselves and
      for others.

      As a consequence, moral principles are provisionally true, where
      they apply to most people, in most cultures, in most circumstances,
      most of the time. At some point in the last 10,000 years (around
      the time of writing and the shift from bands and tribes to
      chiefdoms and states around 5,000 years ago) religions began to
      codify moral precepts into moral codes, and political states began
      to codify moral precepts into legal codes.

    In conclusion, I believe, but cannot prove...that reality exists and
    science is the best method for understanding it, there is no God, the
    universe is determined but we are free, morality evolved as an
    adaptive trait of humans and human communities, and that ultimately
    all of existence is explicable through science.
    Of course, I could be wrong...

                                                   |[398]back to contents|
    ______________________________________________________________________

    [399]JEFFREY EPSTEIN
    Money Manager and Science Philanthropist
    [epstein100.jpg]
    The great breakthrough will involve a new understanding of time...that
    moving through time is not free, and that consciousness itself will be
    seen to only be a time sensor, adding to the other sensors of light
    and space.

                                                   |[400]back to contents|
    ______________________________________________________________________

    [401]MIHALY CSIKSZENTMIHALYI
    Psychologist; Director, Quality of Life Research Center, Claremont
    Graduate University; Author, Flow

    [csik100.jpg] When I first read your question, I was sure it was a
trick--after all, I can prove almost nothing of what I believe. I believe
    the earth is round, but I cannot prove it, nor can I prove that the
    earth revolves around the sun or that the naked fig tree in the garden
    will have leaves in a few months. I can't prove quarks exist or that
    there was a Big Bang--all of these and millions of other beliefs are
    based on faith in a community of knowledge whose proofs I am willing
    to accept, hoping they will accept on faith the few measly claims to
    proof I might advance.

    But then I realized--after reading some of the early postings--that
everyone else has assumed implicitly that the "you" in "even if you
    cannot prove it" referred not to the individual respondent, but to the
    community of knowledge--it actually stood for "one" rather than for
    "you". That everyone seems to have understood this seems to me a
    remarkable achievement, a merging of the self with the collective that
    only great religions and profound ideologies occasionally achieve.

    So what do I believe that no one else can prove? Not much, although I
    do believe in evolution, including cultural evolution, which means
    that I tend to trust ancient beliefs about good and bad, the sacred
    and the profane, the meaningful and the worthless--not because they
    are amenable to proof, but because they have been selected over time
    and in different situations, and therefore might be worthy of belief.

    As to the future, I will follow the cautious weather forecaster who
    announces: "Tomorrow will be a beautiful day, unless it rains." In
    other words, I can see all sorts of potentially wonderful developments
    in human consciousness, global solidarity, knowledge and ethics;
    however, there are about as many trends operating towards opposite
    outcomes: a coarsening of taste, reduction to least common
    denominator, polarization of property, power, and faith. I hope we
    will have the time and opportunity to understand which policies lead
    to which outcomes, and then that we will have the motivation and the
    courage to implement the more desirable alternatives.

                                                   |[402]back to contents|
    ______________________________________________________________________

    [403]LEE SMOLIN
    Physicist, Perimeter Institute; Author, Three Roads to Quantum Gravity
    [smolin100.jpg] I am convinced that quantum mechanics is not a final
    theory. I believe this because I have never encountered an
    interpretation of the present formulation of quantum mechanics that
    makes sense to me. I have studied most of them in depth and thought
    hard about them, and in the end I still can't make real sense of
    quantum theory as it stands. Among other issues, the measurement
    problem seems impossible to resolve without changing the physical
    theory.

    Quantum mechanics must then be an approximate description of a more
    fundamental physical theory. There must then be hidden variables,
    which are averaged over to derive the approximate, probabilistic
    description which is quantum theory. We know from the experimental
    falsifications of the Bell inequalities that any theory which agrees
    with quantum mechanics on a range of experiments where it has been
    checked must be non-local. Quantum mechanics is non-local, as are all
    proposals for replacing it with something that makes more sense. So
    any additional hidden variables must be non-local. But I believe we
    can say more. I believe that the hidden variables represent
    relationships between the particles we do see, which are hidden
    because they are non-local and connect widely separated particles.

    This fits in with another core belief of mine, which derives from
    general relativity, which is that the fundamental properties of
    physical entities are a set of relationships, which evolve
    dynamically. There are no intrinsic, non-relational properties, and
    there is no fixed background, such as Newtonian space and time, which
    exists just to give things properties.

    One consequence of this is that the geometry of space and time is also
    only an approximate, emergent description, applicable only on scales
    too large to see the fundamental degrees of freedom. The fundamental
    relations are non-local with respect to the approximate notion of
    locality that emerges at the scale where it becomes sensible to talk
    about things located in a geometry.

    Putting these together, we see that quantum uncertainty must be a
    residue of the resulting non-locality, which restricts our ability to
    predict the future of any small region of the universe. Hbar, the
    fundamental constant of quantum mechanics that measures the quantum
    uncertainty, is related to N, the number of degrees of freedom in the
    universe. A reasonable conjecture is that hbar is proportional to the
    inverse of the square root of N.
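
    In symbols, the conjecture just stated (a restatement of the sentence
    above, not an additional claim) is

      \hbar \propto \frac{1}{\sqrt{N}} ,

    where N is the number of degrees of freedom in the universe.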

    But how are we to describe physics, if it is not in terms of things
    moving in a fixed spacetime? Einstein struggled with this, and my only
    answer is the one he came to near the end of his life: fundamental
    physics must be discrete, and its description must be in terms of
    algebra and combinatorics.

    Finally, what of time? I have been also unable to make sense of any of
    the proposals to do away with time as a fundamental aspect of our
    description of nature. So I believe in time, in the sense of
causality. I also doubt that the "big bang" is the beginning of time;
    I strongly suspect that our history extends backwards before the big
    bang.

    Finally, I believe that in the near future, we will be able to make
    predictions based on these ideas that will be tested in real
    experiments.

                                                   |[404]back to contents|
    ______________________________________________________________________

    [405]JORDAN POLLACK
    Computer Scientist, Brandeis University

[pollack100.jpg] I believe that systems of self-interested agents
    can make progress on their own without centralized supervision.
    There is an isomorphism between evolution, economics, and education.
    In economics, the supervisor is a central government or super rich
    investor, in evolution, it is the "intelligent designer", and in
    education, it's the teacher or outside examiners. Despite an almost
    religious belief in laissez-faire and incentive-based behavior,
    economic systems are prone to
    winner-take-all phenomena and boom-bust cycles. They seem to require
    benevolent regulation, or "managed competition" to prevent the "rich
    get richer" dynamic leading to monopoly, which leads inevitably to
    corruption and kleptocracy. In evolution, scientists reject the
    intelligent designer as a creationist ruse, but so far our working
    models for open-ended evolution haven't worked, and prematurely
    convergence to mediocrity. In education, evidence of auto-didactic
    learning in video-games and sports is suppressed in academics by
    top-down curriculum frameworks and centralized high-stakes testing.
    If we did have a working mechanism design which could achieve
    continuous progress by decentralized self-interested agents, it would
    settle the creationist objection as well as apply to the other fields,
    leading to a new renaissance.

                                                   |[406]back to contents|
    ______________________________________________________________________

    [407]DAVID GELERNTER
    Computer Scientist, Yale University; Chief Scientist, Mirror Worlds
    Technologies; Author, Drawing Life
    [gelernter100.jpg] I believe (I know--but can't prove!) that
    scientists will soon understand the physiological basis of the
    "cognitive spectrum," from the bright violet of tightly-focused
    analytic thought all the way down to the long, slow red of low-focus
    sleep thought--also known as "dreaming." Once they understand the
    spectrum, they'll know how to treat insomnia, will understand
    analogy-discovery (and therefore creativity), and the role of emotion
    in thought--and will understand that thought takes place not only when
    you solve a math problem but when you look out the window and let your
    mind wander. Computer scientists will finally understand the missing
    mystery ingredient that made all their efforts to simulate human
    thought such naive, static failures, and turned this once-thriving
    research field into a ghost town. (Their failures were "static"
    insofar as people think in different ways at different times--your
    energetic, wide-awake mind works very differently from your tired,
    soon-to-be-sleeping mind; but artificial intelligence programs always
    "thought" in the same way all the time.)

    And scientists will understand why we can't force ourselves to fall
    asleep or to "be creative"--and how those two facts are related.
    They'll understand why so many people report being most creative while
    driving, shaving or doing some other activity that keeps the mind's
    foreground occupied and lets it approach open problems in a "low
    focus" way. In short, they'll understand the mind as an integrated
    dynamic process that changes over a day and a lifetime, but is
    characterized always by one continuous spectrum.

    Here's what we know about the cognitive spectrum: every human being
    traces out some version of the spectrum every day. You're most capable
    of analysis when you are most awake. As you grow less wide-awake, your
    thinking grows more concrete. As you start to fall asleep, you begin
    to free associate. (Cognitive psychologists have known for years that
    you begin to dream before you fall asleep.) We know also that to grow
    up intellectually means to trace out the cognitive spectrum in
    reverse: infants and children think concretely; as they grow up,
    they're increasingly capable of analysis. (Not incidentally, newborns
    spend nearly all their time asleep.)

    Here's what we suspect about the cognitive spectrum: as you move
    down-spectrum, as your thinking grows less analytic and more concrete
and finally bottoms out in the wholly non-logical, highly concrete type of
    thought we call dreaming, emotions function increasingly as the "glue"
    of thought. I can't prove (but I believe) that "emotion coding"
    explains the problem of analogy. Scientists and philosophers have
    knocked their heads against this particular brick wall for years: how
    can people say "a brick wall and a hard problem seem wholly different,
    yet I can draw an analogy between them"? If we knew that, we'd
    understand the essence of creativity. The answer is: we are able to
    draw an analogy between two seemingly unlike things because the two
    are associated in our minds with the same emotion. And that emotion
    acts as a connecting bridge between them. Each memory comes with a
    characteristic emotion; similar emotions allow us to connect two
    otherwise-unlike memories. An emotion (NB!) isn't the crude, simple
    thing we make it out to be in speaking or writing--"happy," "sad,"
    etc.; an emotion can be the delicate, complex, nuanced, inexpressible
    feeling you get on the first warm day in spring.
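
    To make the "emotion coding" idea concrete, here is a toy sketch in
    Python. It is entirely hypothetical: the memories and their
    three-component "emotion vectors" are invented for illustration, and
    the claim is only that memories with similar emotional coloring get
    linked, however different their literal content.

      import numpy as np

      # Hypothetical memories, each tagged with a made-up emotion vector
      # (the components might stand for, say, frustration, awe, calm).
      memories = {
          "a brick wall":           np.array([0.9, 0.1, 0.1]),
          "a hard problem":         np.array([0.8, 0.2, 0.1]),
          "first warm spring day":  np.array([0.0, 0.6, 0.9]),
      }

      def emotional_similarity(a, b):
          # Cosine similarity between two emotion vectors.
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      # An "analogy" is the pair of distinct memories whose emotions are
      # most alike, regardless of the memories' content.
      pairs = [(m, n) for m in memories for n in memories if m < n]
      best = max(pairs,
                 key=lambda p: emotional_similarity(memories[p[0]],
                                                    memories[p[1]]))
      print(best)  # ('a brick wall', 'a hard problem')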

    And here's what we don't know: what's the physiological mechanism of
    the cognitive spectrum? What's the genetic basis? Within a generation,
    we'll have the answers.

                                                   |[408]back to contents|
    ______________________________________________________________________

    [409]JOHN HORGAN
    Science Writer; Author, Rational Mysticism

    [horgan100.jpg] I believe neuroscientists will never have enough
    understanding of the neural code, the secret language of the brain, to
read people's thoughts without their consent.

    The neural code is the software, algorithm, or set of rules whereby
    the brain transforms raw sensory data into perceptions, memories,
    decisions, meanings. A complete solution to the neural code could, in
    principle, allow scientists to monitor and manipulate minds with
    exquisite precision; you might, for example, probe the mind of a
    suspected terrorist for memories of past attacks or plans for future
    ones. The problem is, although all brains operate according to certain
    general principles, each person's neural code is to a certain extent
    idiosyncratic, shaped by his or her unique life history.

    The neural pattern that underpins my concept of "George Bush" or
    "Heathrow Airport" or "surface-to-air missile" differs from yours. The
    only way to know how my brain encodes this kind of specific
    information would be to monitor its activity--ideally with thousands
    or even millions of implanted electrodes, which can detect the chatter
    of individual neurons--while I tell you as precisely as possible what
    I am thinking. But data you glean from studying me will be of no use
    for interpreting the signals of any other person. For ill or good, our
    minds will always remain hidden to some extent from Big Brother.

                                                   |[410]back to contents|
    ______________________________________________________________________

    [411]JOHN R. SKOYLES
    Neuroscience researcher; Coauthor, Up From Dragons
    [skoyles100.jpg] Here's what I believe but cannot prove: human beings,
    like all animals, have evolved a range of capacities for fighting
    disease and recovering from injury, including a variety of 'sickness
behaviors'; human beings alone, however, have discovered the advantages
    of off-loading much of the responsibility for managing their sickness
    behaviors to other people; the result is that for human beings the
    very nature of illness has changed--human illness is now largely a
    social phenomenon.

    This is possible because "illness" is a response. A rise in body
    temperature, for example, kills many bacteria and changes the membrane
    properties of cells so viruses cannot replicate. The pain of a broken
    bone or weak heart makes sure we let it heal or rest. Nature supplied
    our bodies in this way with a first-aid kit, but unfortunately, like
    many medicines, its "treatments" are unpleasant. That unpleasantness,
    not the dysfunction they seek to remedy, is what we call
    "illness".

    These remedies, however, have costs as well as benefits, often making
    it difficult for the body to know whether to deploy them. A fever
    might fight an infection but if the body lacks sufficient energy
    stores, the fever might kill. The body therefore must make a decision
    whether the gain of clearing the infection merits the risk.
    Complicating that decision is that the body is blind, for example, to
    whether it faces a mild or a life-threatening virus. The body thus
    deploys its treatments in a precautionary manner. If only one in ten
    fevers actually clears an infection that would kill, it makes sense to
    tolerate the cost of the other nine. Most of the body's capacities for
    fighting disease and repairing injury are deployed in this
    precautionary way. We feel pain in a broken limb so we treat it
    over-protectively: on nine occasions out of ten we could get by with
    less protective pain, but on the tenth it stops us from causing further
    injury. Precautionary deployment is costly, however, so evolution has
    put the evaluation of such deployment under the control of the brain in
    an attempt to keep its use to a minimum.
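
    The precautionary logic here is a simple expected-value comparison. A
    toy sketch in Python, with all quantities invented purely to mirror
    the one-in-ten fever example:

      # Hypothetical numbers chosen only to illustrate the argument; none
      # of them are measured costs or probabilities.
      p_deadly = 0.1           # assumed chance the infection would otherwise kill
      benefit_if_deadly = 100  # assumed value of surviving a deadly infection
      cost_of_fever = 5        # assumed metabolic and discomfort cost of a fever

      # Deploy the precautionary response whenever the expected gain
      # exceeds the cost, even though nine fevers in ten are "wasted".
      expected_gain = p_deadly * benefit_if_deadly   # 10.0
      print(expected_gain > cost_of_fever)           # True: run the fever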

    But the brain on its own often lacks the experience to know our own
condition. Fortunately, other people can, particularly those who have
    studied health and illness.

    Human evolution therefore changed illness by offloading decisions
    about deployment whenever possible onto professionals. People who
    make themselves experienced in disease and injury, after all, have the
    background knowledge to know our bodies much better than we ourselves do.
    Healing professionals--healers, shamans, witch doctors and
    medics--exist in all human cultures. Of course, such professionals
    were seen by their patients as offering real treatments--and a few did
help, such as by advising rest, eating well and taking medicinal herbs. But
    most of what they did was ineffective. Doctors indeed had to wait
    until 1908 and Paul Ehrlich's discovery of Salvarsan for treating
    syphilis before they had a really effective treatment for a major
    disease. Nonetheless, earlier doctors and healers were considered by
    themselves and their patients to be in possession of very powerful
    cures.

Why? The answer, I believe, is that their ineffective rituals and
    potions actually worked. Evolution prepared us to offload control of
    our abilities to fight disease and heal injuries to those that knew
    more than us. The rituals and quackery of healers might have not
    worked but they certainly made a patient feel they were in the hands
    of an expert. That gave a healer great power over their patient. As
    noted, many of the body's own "treatments" are used on a precautionary
    basis so they can be stopped without harm. A healer could do this by
    applying an impressive "cure" that persuaded the body that its own
    "treatments" were no longer needed. The body would trust its healer
and halt its own efforts, and with them the "illness". The patient as a result
    would feel much better, if not cured. Human evolution therefore made
    doctoring more than just a science and a question of prescribing the
    right treatment. It also made it an art by which a doctor persuades
    the patient's body to offload its decision-making onto the doctor.

                                                   |[412]back to contents|
    ______________________________________________________________________

    [413]THOMAS METZINGER
    Johannes Gutenberg-Universität Mainz; Author, Being No One
    [metzinger100.jpg] I believe, but cannot prove, that a First
    Breakthrough on Consciousness is actually around the corner. "Actually
    around the corner" means: less than 50 years away. My intuition is
    that, roughly, all we need for this first breakthrough are four
    convincing stories.

    The first story will be about global integration, about the dynamical
    self-organization of long-range binding operations in the human brain.
    It will probably involve something like synchrony in multiple
    frequency bands, and will let us understand how a unified model of the
    world can emerge in our own heads.

    The second story will be about "transparency": Why is it that we are
    unable to consciously experience most of the images our brain
    generates as images? The answer to this question will give us a real
    world. The transparency-tale has to do with not being able to see
    earlier processing stages and becoming a naive realist.

    The third story will focus on the Now, the emergence of a
    psychological moment--on a deeper understanding of what William James
    called the "specious present". Experts on short-term memory and neural
    network modelers will tell this story for us. As it unfolds, it will
    explain the emergence of a subjective present and let us understand
    how conscious experience, in its simplest and most essential form, is
    the presence of a world.

    Interestingly, today almost everybody in the consciousness community
    already agrees on some version of the fourth story: Consciousness is
    directly linked to attentional processing, more precisely, to a hidden
    mechanism constantly holding information available for attention. The
    subjective presence of a world is a clever strategy of making
    integrated information available for attention.

    I believe, but cannot prove, that this will allow us to find the
    global neural correlate for consciousness. However, being a
    philosopher, I want much more than that--I am also interested in
    precise concepts. What I will be waiting for is the young
    mathematician who then comes along and suddenly allows us to see how
    all of these four stories were actually only one: The genius who gives
    us a formal model describing the information flow in this neural
    correlate, and in just the right way. She will harvest the fruits of
generations of researchers before her, and this will be the First
    Breakthrough on Consciousness.

    Then three things will happen.

      1. The Second Breakthrough on Consciousness will take much longer.
      Things will get messy and complicated. The philosophy and
      neuroscience of consciousness will get bogged down in diabolic
      details and ugly technical problems. Public attention will soon
      shift away from the problem of consciousness per se. Instead, new
      generations of young researchers will now focus on the nature of
      self and social cognition.

      2. The overall development will have an unexpectedly strong
      cultural impact. People will not want to face their own mortality.
      There will be fundamentalist and anti-rational counter movements
      against the scientific image of man. At the same time crude new
      ideologies propagating vulgar forms of materialism and primitive
      forms of hedonism will spring up. Scientists will realize that one
      cannot reductively explain the human mind and then simply look the
      other way, leaving the consequences for someone else to deal
      with.

      3. We will be able to influence consciousness in ways we have never
      dreamt of. There will be a new form of technology--Consciousness
      Technology--exclusively focusing on how to manipulate the neural
      correlate of consciousness in ever more fine-grained, efficient,
      and risk-free ways. People will realize that we need some sort of
      applied ethics for this new type of technology. And hopefully we
      will all together start to tell a new story--a story about how to
      live with these brains and about what a good state of consciousness
      actually is.

                                                   |[414]back to contents|
    ______________________________________________________________________

    [415]JEAN PAUL SCHMETZ
    Economist; Managing Director of CyberLab Interactive Productions GmbH
    (Burda Media Group).

    [schmetz100.jpg] When considering this question one has to remember
    the basis of the scientific method: formulating hypotheses that can be
    disproved. Those hypotheses that are not disproved are thought to be
    true until disproved. Since it is more glamorous for a scientist to
    formulate hypotheses than it is to spend years disproving existing
    ones from other scientists, and since it is unlikely that someone will
    spend enough time and energy trying to disprove his/her own
    statements, our body of scientific knowledge is surely full of
statements we believe to be true but that will eventually be proved to be
    false.

    So I turn the question around: What scientific ideas that have not
been disproved do you believe are false?

    In my field (theoretical economics), I believe that most ideas taught
    in economics 101 will be proved false eventually. Most of them would
    already have been officially defined as false in any harder science,
    but, for lack of better hypotheses, they are still
    widely accepted and used in economics and general commentary.
Eventually, someone will come up with another type of hypothesis
    explaining (and predicting) the economic reality in a way that will
    render most existing economics beliefs false.

                                                   |[416]back to contents|
    ______________________________________________________________________

    [417]RICHARD DAWKINS
    Evolutionary Biologist, Oxford University; Author, The Ancestor's Tale
    [dawkins100.jpg]
    I believe that all life, all intelligence, all creativity and all
    'design' anywhere in the universe, is the direct or indirect product
    of Darwinian natural selection. It follows that design comes late in
    the universe, after a period of Darwinian evolution. Design cannot
    precede evolution and therefore cannot underlie the universe.

                                                   |[418]back to contents|
    ______________________________________________________________________

    [419]ALEX (SANDY) PENTLAND
    Computer Scientist, MIT Media Laboratory

    [pentland100.jpg] Tribal Mind
    What would it be like to be part of a distributed intelligence but
    still with an individual consciousness? Well for starters, you might
    expect to see the collective mind 'take over' from time to time,
    directly guiding the individual minds. In humans, the behavior of
angry mobs and frightened crowds seems to qualify as an example of a
    'collective mind' in action, with non-linguistic channels of
    communication usurping the individual capacity for rational behavior.
    But as powerful as this sort of group compulsion can be, it is usually
    regarded as simply a failure of individual rationality, a primitive
    behavioral safety net for the tribe in times of great stress. Surely
    this tribal mind doesn't operate in normal day-to-day behavior--or
does it? If human behavior were in substantial part
    due to a collective tribal mind, you would expect that non-linguistic
    social signaling--the type that drives mob behavior--would be
    predictive of even the most rational and important human interactions.
By analogy with the waggle dance of the honeybee, there ought to be
    non-linguistic signals that accurately predict important behavioral
    outcomes.
    And that is exactly what I find. Together with my research group I
    have built a computer system that objectively measures a set of
    non-linguistic social signals, such as engagement, mirroring,
    activity, and stress, by looking at 'tone of voice' over one minute
    time periods. Although people are largely unconscious of this type of
    behavior, other researchers (Jaffe, Chartrand and Bargh, France,
    Kagen) have shown that similar measurements are predictive of infant
    language development, judgments of empathy, depression, and even
    personality development in children. Working with colleagues, we have
    found that we can use these measurements of social signaling to
    automatically predict a wide range of important behavioral
    outcomes--objective, instrumental, and subjective--with high accuracy,
    accounting for between 30% and 80% of the total outcome variance.
    Examples of objective and instrumental behaviors where we can
    accurately predict the outcome include salary negotiations, dating
    decisions, and role in the social network. Examples of subjective
    predictions include hiring preferences, empathy perceptions, and
    interest ratings. Even for lengthy interactions, accurate predictions
    can be made by observing only the initial few minutes of interaction,
    even though the linguistic content of these 'thin slices' of
    behavior seems to have little predictive power.
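    (As a rough illustration of the kind of analysis described here--not
    the group's actual code--the Python sketch below fits a linear model
    from the four signal types named above (engagement, mirroring,
    activity, stress) to an outcome and reports the fraction of variance
    accounted for; all the numbers are synthetic placeholders.)

        # Hedged sketch: predict an interaction outcome from four
        # non-linguistic "social signal" features. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        # hypothetical per-conversation measurements:
        # engagement, mirroring, activity, stress
        X = rng.normal(size=(n, 4))
        true_w = np.array([0.8, 0.5, 0.2, -0.6])
        y = X @ true_w + rng.normal(scale=1.0, size=n)  # e.g. a scaled outcome

        # ordinary least-squares fit with an intercept term
        Xb = np.column_stack([X, np.ones(n)])
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

        # fraction of outcome variance accounted for; the 30%-80% figures
        # quoted above are numbers of this kind (R^2)
        resid = y - Xb @ w
        r2 = 1 - resid.var() / y.var()
        print(f"R^2 = {r2:.2f}")
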
    I find all of this astounding. We are examining some of the most
    important interactions a human has: finding a mate, getting a job,
    negotiating a salary, finding your place in your social network. These
    are activities for which we prepare intellectually and strategically
    for decades. And yet the largely unconscious social signaling that
    occurs at the start of the interaction appears to be more predictive
    than either the contextual facts (is he attractive? is she
    experienced?) or the linguistic structure (e.g., strategy chosen,
    arguments employed, etc.).
    So what is going on here? One might speculate that the social
    signaling we are measuring evolved as a method of establishing tribal
    hierarchy and cohesion, analogous to Dunbar's view that language
    evolved as grooming behavior. On this view the tribal mind would
    function as unconscious collective discussion about relationships and
    resources, risks and rewards, and would interact with the conscious
    individual minds by filtering ideas by their value relative to the
    tribe. Our measurements tap into the discussion, and predict outcome
    by use of social regularities. For instance, in a salary negotiation
    it is important for the lower-status individual to establish that they
    are a 'team player' by being empathetic, while in a potential dating
    situation the key variable is the female's level of interest. In our
    data there seem to be patterns of signaling that reliably lead to
    these desired states.
    One question to ask about this social signaling is whether or not it
    is an independent channel of communication, e.g., is it causal or do
    the signals arise from the linguistic structure? We don't have the
    full answer to that yet, but we do know that similar measurements
    predict infant language and personality development, that adults can
    change their signaling by adopting different roles or identities
    within a conversation, and in our studies the linguistic and factual
    content seems uncorrelated with the pattern or intensity of social
    signaling. So even if social signaling turns out to be only an adjunct
    to normal linguistic structure, it is a very interesting addition: it
    is a little like having speech annotated with speaker intent!
    So here is what I suspect but cannot prove: a very large proportion
    of our behavior is determined by largely unconscious social signaling,
    which sets the context, risk, and reward structure within which
    traditional cognitive processes proceed. This conjecture resonates
    with Pinker's view about brain complexity, and with Kosslyn's thoughts
    about social prosthetic systems. It also provides a concrete
    mechanism for the well-known processes of group polarization ('the
    risky shift'), groupthink, and the sometimes irrational behaviors of
    larger groups. In short, it may be useful to start thinking of
    humans as having a collective, tribal mind in addition to their
    personal mind.

                                                   |[420]back to contents|
    ______________________________________________________________________

    [421]JARON LANIER
    Computer Scientist and Musician
    My career has been guided by just the sort of unproven
    guess this year's question seeks.
    My belief is that the potential for expanded communication between
    people far exceeds the potential both of language as we think of it
    (the stuff we say, read and write) and of all the other communication
    forms we already use.
    Suppose for a moment that children in the future will grow up with an
    easy and intimate virtual reality technology and that their use of it
    will become focused on invention and design instead of the consumption
    of pre-created holo-video games, surround movies, or other content.
    Maybe these future children will play virtual musical instrument-like
    things that cause simulated trees and spiders and seasons and odors
    and ecologies to spring up just as manipulating a pencil causes words
    to appear on a page. If people grew up with a virtuosic ability to
    improvise the contents of a shared virtual world, a new sort of
    communication might also appear.
    It's barely possible to imagine what a "reality conversation" would be
    like. Each person would be changing the shared world at the speed of
    language, all at once, an image that suggests chaos, but often there
    would be a coherence, which would indicate meaning. A kid becomes a
    monster, eats his little brother, who becomes a vitriolic turd, and so
    on.
    This is what I've called "Post-symbolic communication," though really
    it won't exist in isolation from or in opposition to symbolic
    communication techniques. It will be something different, however, and
    will expand what people can mean to each other.
    Post-symbolic communication will be like a shared, waking state,
    intentional dream. Instead of the word "house", you will express a
    particular house and be able to walk into it, and instead of the
    category "house" you will peer into an apparently small bucket that is
    big enough inside to hold all the universe's houses so you can assess
    what they have in common directly. It will be a fluid form of
    experiential concreteness providing similar but divergent expressive
    power to that of abstraction.
    Why care? The acquisition of post-symbolic communication will be a
    centuries-long adventure, an expansion of meaning, something
    beautiful, and a way to seek cool, advanced technology that focuses on
    connection instead of mere power. It will be a form of beauty that
    also enhances survivability. Since the drive for "cool tech" is
    unstoppable, the invention of provocative cool tech that is lovely
    enough to seduce the attention of young smart men away from arms races
    is a prerequisite to the survival of the species.
    Some of the examples above (houses, spiders) are of people improvising
    the external environment, but post-symbolic communication might
    typically look a lot more like people morphing themselves into varied
    forms. Experiments have already been conducted with kids wearing
    special body suits and goggles "turning into" triangles to learn
    trigonometry, or molecules to learn chemistry.
    It's not only the narcissism of the young (and not so young) human
    mind or the primality of the control of one's own body that makes
    self-transformation compelling. Evolution, as generous as she ended up
    being with us humans, was stingy with potential means of expression.
    Compare us with the mimic octopus which can morph into all sorts of
    creatures and objects, and can animate its skin. An advanced
    civilization of cephalopods might develop words as we know them, but
    probably only as an adjunct to a natural form of post-symbolic
    communication.
    We humans can control precious little of the world with enough agility
    to keep up with our thoughts and feelings. The fingers and the tongue
    are about it. Symbols as we know them in language are a trick, or
    what programmers call a "hack," that expands the power of little
    appendage wiggles to refer to all that we can't instantly become or
    create. Another belief: The tongue that can speak could also someday
    control fantastic forms beyond our current imaginings. (Some early
    experiments along these lines have been done, using ultrasound sensing
    through the cheek, and the results are at least not terrible.)
    While we're confessing unprovable beliefs, here's another one: The
    study of the genetic components of pecking order behavior, group
    belief cues, and clan identification leading to inter-clan hostility
    will be the core of psychology and sociology for the next few
    generations, and it will turn out we can't turn off or control these
    elements of human character without losing other qualities we love,
    like creativity. If this dark guess is correct, then the means to
    survival is to create societies with a huge variety of paths to
    success and a multitude of overlapping, intertwined clans and pecking
    orders, so that everyone can be a winner from equally valid individual
    perspectives. When the American experiment has worked best, it has
    approximated this level of variety. The virtual worlds of
    post-symbolic communication can provide the highest level of variety
    to satisfy the dangerous psychic inheritance I'm guessing we suffer as
    a species.
    Implicit in the futures I am imagining here is a solution to the
    software crisis. If children are breathing out fully realized
    creatures and skies just as they form sentences today, there must be
    software present which isn't crashing and is marvelously flexible and
    responsive, yet free of limiting pre-conceptions, which would revive
    symbolism. Can such software exist? Ah! Another belief! My guess is it
    can exist, but not anytime soon. The only two good examples of
    software we have at this time are evolution and the brain, and they
    both are quite good, so why not be encouraged?
    The beliefs I chose for this response are not fundamentally
    untestable. They might be tested someday, perhaps in a few centuries.
    It's not impossible that medical progress could keep me alive long
    enough to participate in testing them, so strictly speaking I can't
    guarantee that I can't ever prove these beliefs to be true.
    There are not too many potential beliefs that could really never be
    tested by anyone ever.
    Consciousness, meaning, truth, and free will and their endless
    permutations just about complete the list. The reason philosophy is so
    much harder to talk about than science is that there's so little to
    talk about. It quickly becomes almost impossible to distinguish
    repetition from resonance.
    Proposals like post-symbolic communication, however, frame questions
    about meaning that are small enough to be fresh and useful. Am I right
    that there can be meaning outside of words, or are the
    word-as-center-of-meaning folks correct?

                                                   |[422]back to contents|
    ______________________________________________________________________

    [423]JOHN BARROW
    Cosmologist, Cambridge University; Author, The Infinite Book

    That our universe is infinite in size, finite in age, and just one
    among many. Not only can I not prove it but I believe that these
    statements will prove to be unprovable in principle and we will
    eventually hold that principle to be self-evident.

                                                   |[424]back to contents|
    ______________________________________________________________________

    [425]RAY KURZWEIL
    Inventor and Technologist; Author, The Age of Spiritual Machines

    We will find ways to circumvent the speed of light
    as a limit on the communication of information.
    We are expanding our computers and communication systems both inwardly
    and outwardly. Our chips use ever smaller feature sizes, while at the
    same time we deploy greater amounts of matter and energy for
    computation and communication (for example, we're making a larger
    number of chips each year). In one to two decades, we will progress
    from two-dimensional chips to three-dimensional self-organizing
    circuits built out of molecules. Ultimately, we will approach the
    limits of matter and energy to support computation and communication.
    As we approach an asymptote in our ability to expand inwardly (that
    is, using finer features), computation will continue to expand
    outwardly, using readily available materials on Earth such as carbon.
    But we will eventually reach the limits of the resources available on
    our planet, and will expand outwardly to the rest of the solar system
    and beyond.
    So how quickly will we be able to do this? We could send tiny
    self-replicating robots at close to the speed of light along with
    electromagnetic transmissions containing the needed software. These
    nanobots could then colonize far-away planets.
    At this point, we run up against a seemingly intractable limit: the
    speed of light. Although a billion feet per second may seem fast, the
    Universe is spread out over such vast distances that this appears to
    represent a fundamental limit on how quickly an advanced civilization
    (such as we hope to become) can spread its influence.
    There are suggestions, however, that this limit is not as immutable as
    it may appear. Physicists Steve Lamoreaux and Justin Torgerson of the
    Los Alamos National Laboratory have analyzed data from an old natural
    nuclear reactor that two billion years ago produced a fission reaction
    lasting several hundred thousand years in what is now West Africa.
    Analyzing radioactive isotopes left over from the reactor and
    comparing them to isotopes from similar nuclear reactions today, they
    determined that the physics constant "alpha" (also called the fine
    structure constant), which determines the strength of the
    electromagnetic force, has apparently changed over the past two billion
    years. The speed of light is inversely proportional to alpha, and both
    have been considered unchangeable constants. Alpha appears to have
    decreased by 4.5 parts in 10^8. If confirmed, this would imply that
    the speed of light has increased. There are other studies with similar
    suggestions, and there is a table top experiment now under way at
    Cambridge University to test the ability to engineer a small change in
    the speed of light.
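    (To spell out the relation invoked above: in SI units the fine
    structure constant is

        \alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c},

    so if the electron charge e, the permittivity \varepsilon_0, and
    Planck's constant \hbar are held fixed--an assumption this argument
    makes implicitly--then c \propto 1/\alpha, and a fractional decrease
    in alpha of roughly 4.5 x 10^-8 would correspond to a fractional
    increase in the speed of light of the same order.)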
    Of course, these results will need to be carefully verified. If true,
    it may hold great importance for the future of our civilization. If
    the speed of light has increased, it has presumably done so not just
    because of the passage of time, but because certain conditions have
    changed. This is the type of scientific insight that technologists can
    exploit. It is the nature of engineering to take a natural, often
    subtle, scientific effect, and control it with a view towards greatly
    leveraging and magnifying it. If the speed of light has changed due to
    changing circumstances, that cracks open the door just enough for the
    capabilities of our future intelligence and technology to swing the
    door widely open. That is the nature of engineering. As one of many
    examples, consider how we have focused and amplified the subtle
    properties of Bernoulli's principle (that air rushing over a curved
    surface has a slightly lower air pressure than over a flat surface) to
    create the whole world of aviation.
    If it turns out that we are unable to actually change the speed of
    light, we may nonetheless circumvent it by using wormholes (which can
    be thought of as folds of the universe in dimensions beyond the three
    visible ones) as short cuts to far away places.
    In 1935, Einstein and physicist Nathan Rosen described
    "Einstein-Rosen" bridges as a way of describing electrons and other
    particles in terms of tiny space-time tunnels. In 1955, physicist John
    Wheeler described these tunnels as "wormholes," introducing the term
    for the first time. His analysis of wormholes showed them to be fully
    consistent with the theory of general relativity, which describes
    space as essentially curved in another dimension.
    In 1988, California Institute of Technology physicists Michael Morris,
    Kip Thorne, and Ulvi Yurtsever described in some detail how such
    wormholes could be engineered. Based on quantum fluctuation, so-called
    "empty" space is continually generating tiny wormholes the size of
    subatomic particles. By adding energy and following other requirements
    of both quantum physics and general relativity (two fields that have
    been notoriously difficult to integrate), these wormholes could in
    theory be expanded in size to allow objects larger than subatomic
    particles to travel through them. Sending humans would not be
    impossible, but extremely difficult. However, as I pointed out above,
    we really only need to send nanobots plus information, which could go
    through wormholes measured in microns rather than meters. Anders
    Sandberg estimates that a one-nanometer wormhole could transmit a
    formidable 10^69 bits per second.
    Thorne and his Ph.D. students, Morris and Yurtsever, also describe a
    method consistent with general relativity and quantum mechanics that
    could establish wormholes between Earth and far-away locations quickly
    even if the destination were many light-years away.
    Physicist David Hochberg and Vanderbilt University's Thomas Kephart
    point out that shortly after the Big Bang, gravity was strong enough
    to have provided the energy required to spontaneously create massive
    numbers of self-stabilizing wormholes. A significant portion of these
    wormholes are likely to still be around, and may be pervasive,
    providing a vast network of corridors that reach far and wide
    throughout the Universe. It might be easier to discover and use these
    natural wormholes than to create new ones.
    Would anyone be shocked if some subtle ways of getting around the
    speed of light were discovered? The point is that if there are even
    subtle ways around this limit, the technological powers that our
    future human-machine civilization will achieve will discover these
    means and leverage them to great effect.

                                                   |[426]back to contents|
    ______________________________________________________________________

    [427]STUART KAUFFMAN
    Biologist, Santa Fe Institute; Author, Investigations

    Is there a fourth law of thermodynamics, or some cousin of it,
    concerning self-constructing non-equilibrium systems such as
    biospheres anywhere in the cosmos?

    I like to think there may be such a law.

    Consider this: the number of possible proteins 200 amino acids long is
    20 raised to the 200th power, or about 10 raised to the 260th power.
    Now, the number of particles in the known universe is about 10 to the
    80th power. Suppose that, on a microsecond time scale, the universe were
    doing nothing other than producing proteins of length 200. It turns out
    that it would take vastly many repeats of the history of the universe
    to create all possible proteins of length 200. This means that, for
    entities of complexity above atoms, such as modestly complex organic
    molecules, proteins, let alone species, automobiles and operas, the
    universe is on a unique trajectory (ignoring quantum mechanics for the
    moment). That is, the universe at modest levels of complexity and
    above is vastly non-ergodic.
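    (A quick back-of-the-envelope check of these figures in Python. The
    counts are the round numbers quoted above; the 10^17-second age of
    the universe is my own added round figure, so the result is only an
    order-of-magnitude estimate.)

        # Rough arithmetic behind the "vastly non-ergodic" claim.
        import math

        possible_proteins = 20 ** 200          # proteins 200 amino acids long
        print(math.log10(possible_proteins))   # ~260.2, i.e. about 10^260

        particles = 10 ** 80                   # particles in the known universe
        age_microseconds = 10 ** 17 * 10 ** 6  # ~10^17 seconds, in microseconds
        # suppose every particle produced one protein per microsecond
        proteins_made = particles * age_microseconds
        repeats_needed = possible_proteins / proteins_made
        print(math.log10(repeats_needed))      # ~157: about 10^157 repeats of
                                               # cosmic history would be needed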

    Now conceive of the "adjacent possible", the set of entities that are
    one "step" away from what exists now. For chemical reaction systems,
    the adjacent possible from a set of compounds already existing (called
    the "actual" ) is just the set of novel compounds that can be produced
    by single chemical reactions among the initial "actual" set. Now, the
    biosphere has expanded into its molecular adjacent possible since 4.8
    billion years ago.

    Before life, there were perhaps a few hundred organic molecule species
    on the earth. Now there are perhaps a trillion or more. We have no law
    governing this expansion into the adjacent possible in this
    non-ergodic process. My hoped for law is that biospheres everywhere in
    the universe expand in such a way that they do so as fast as is
    possible while maintaining the rough diversity of what already exists.
    Otherwise stated, the diversity of things that can happen next
    increases on average as fast as it can.

                                                   |[428]back to contents|
    ______________________________________________________________________

    [429]GARY MARCUS
    Psychologist, New York University; Author, The Birth of the Mind

    If computers are made up of hardware and software, transistors and
    resistors, what are the neural machines we know as minds made up of?

    Minds clearly are not made up of transistors and resistors, but I
    firmly believe that at least one of the most basic elements of
    computation is shared by man and machine: the ability to represent
    information in terms of an abstract, algebra-like code.

    In a computer, this means that software is made up of hundreds,
    thousands, even millions of lines that say things like IF X IS GREATER
    THAN Y, DO Z, or CALCULATE THE VALUE OF Q BY ADDING A, B, AND C. The
    same kind of abstraction seems to underlie our knowledge of
    linguistics. For instance, the famous linguistic dictum that a
    Sentence consists of a Noun Phrase plus a Verb Phrase can apply to an
    infinite number of possible nouns and verbs, not just a few familiar
    words. In its open-endedness, it is an example of mental algebra par
    excellence.

    In my lab, we discovered that even infants seem to be able to grasp
    something quite similar. For example, in the course of just two
    minutes, a seven-month-old baby can extract the ABA "grammar" inherent
    in a set of made-up sentences like la ta la, ga na ga, je li je. Or the
    ABB "grammar" in sentences like la ta ta, ga na na, je li li.

    Of course, this experiment doesn't prove that there is an "algebra"
    circuit in the brain--psychological techniques alone can't do that.
    For final proof, we'll need neuroscientific techniques far more
    sophisticated than contemporary brain imaging, such that we can image
    the brain at the level of interactions between individual neurons. But
    every bit of evidence that we can collect now--from babies, from
    toddlers, from adults, from psychology and from linguistics--seems to
    confirm the idea that algebra-like abstraction is a fundamental
    component of thought.

                                                   |[430]back to contents|
    ______________________________________________________________________

    [431]KARL SABBAGH
    Writer and Television Producer; Author, The Riemann Hypothesis

    I believe it is true that if there is intelligent
    life elsewhere in the universe, of whatever form, it will be familiar
    with the same concept of counting numbers.

    Some philosophers believe that pure mathematics is human-specific and
    that it is possible for an entirely different type of mathematics to
    emerge from a different type of intelligence, a type of mathematics
    that has nothing in common with ours and may even contradict it. But
    it is difficult to think of what sort of life-form would not need the
    counting numbers. The stars in the sky are discrete points and cry out
    to be counted by beings throughout the universe, but alien life-forms
    may not have vision.
    Intelligent objects with boundaries between being and non-being surely
    want to be measured--"I'm bigger than you", "I need a size 312
    overcoat"--but perhaps there are life-forms which don't have
    boundaries but are continuously varying density changes in some Jovian
    sea. Intelligent life might be disembodied or at least lack a discrete
    body and merely be transmitted between various points in a solid
    material matrix, so that it was impossible to distinguish one
    intelligent being from another.

    But sooner or later, whether it is to measure the passing of time, the
    magnitude of distance, the density of one Jovian being compared with
    another, numbers will have to be used. And if numbers are used, 2 + 2
    must always equal 4, the number of stars in the Pleiades brighter than
    magnitude 5.7 will always be 11, which will always be a prime number,
    and two measurements of the speed of light in any units in identical
    conditions will always be identical. Of course, the fact that I find
    it difficult to think of beings which won't need our sort of
    mathematics doesn't mean they don't exist, but that's what I believe
    without proof.

                                                   |[432]back to contents|
    ______________________________________________________________________

    [433]SCOTT ATRAN
    Anthropologist, University of Michigan; Author, In Gods We Trust

    There is no God that has existence apart from people's
    thoughts of God. There is certainly no Being that can simply suspend
    the (nomological) laws of the universe in order to satisfy our
    personal or collective yearnings and whims--like a stage director
    called on to change and improve a play. But there is a mental
    (cognitive and emotional) process common to science and religion of
    suspending belief in what you see and take for obvious fact. Humans
    have a mental compulsion--perhaps a by-product of the evolution of a
    hyper-sensitive reasoning device to serve our passions--to situate and
    understand the present state of mundane affairs within an indefinitely
    extendable and overarching system of relations between hitherto
    unconnected elements. In any event, what drives humanity forward in
    history is this quest for non-apparent truth.

                                                   |[434]back to contents|
    ______________________________________________________________________

    [435]JESSE BERING
    Psychologist, University of Arkansas

    In 1936, shortly after the outbreak of the Spanish
    Civil War, the moribund philosopher Miguel de Unamuno, author of the
    classic existential text Tragic Sense of Life, died alone in his
    office of heart failure at the age of 72.

    Unamuno was no religious sentimentalist. As a rector and Professor of
    Greek at the University of Salamanca, he was an advocate of
    rationalist ideals and even died a folk hero for openly denouncing
    Francisco Franco's fascist regime. He was, however, ridden with a
    'spiritual' burden that troubled him nearly all his life. It was the
    problem of death. Specifically, the problem was his own death, and
    what, subjectively, it would be "like" for him after his own death:
    "The effort to comprehend it causes the most tormenting dizziness."
    I've taken to calling this dilemma "Unamuno's paradox" because I
    believe that it is a universal problem. It is, quite simply, the
    materialist understanding that consciousness is snuffed out by death
    coming into conflict with the human inability to simulate the
    psychological state of death.

    Of course, adopting a parsimonious stance allows one to easily deduce
    that we as corpses cannot experience mental states, but this
    theoretical proposition can only be justified by a working scientific
    knowledge (i.e., that the non-functioning brain is directly equivalent
    to the cessation of the mind). By stating that psychological states
    survive death, or even alluding to this possibility, one is committing
    oneself to a radical form of mind-body dualism. Consider how bizarre
    it truly is: Death is seen as a transitional event that unbuckles the
    body from its ephemeral soul, the soul being the conscious personality
    of the decedent and the once animating force of the now inert physical
    form. This dualistic view sees the self as being initially contained
    in bodily mass, as motivating overt action during this occupancy, and
    as exiting or taking leave of the body at some point after its
    biological expiration. So what, exactly, does the brain do if mental
    activities can exist independently of the brain? After all, as John
    Dewey put it, mind is a verb, not a noun.

    And yet this radicalism is especially common. In the United States
    alone, as much as 95% of the population reportedly believes in life
    after death. How can so many people be wrong? Quite easily, if you
    consider that we're all operating with the same standard, blemished
    psychological hardware. It's tempting to argue, as Freud did, that
    it's just people's desire for an afterlife that's behind it all. But
    it would be a mistake to leave it at that. Although there is
    convincing evidence showing that emotive factors can be powerful
    contributors to people's belief in life after death, whatever one's
    motivations for rejecting or endorsing the idea of an immaterial soul
    that can defy physical death, the ability to form any opinion on the
    matter would be absent if not for our species' expertise at
    differentiating unobservable minds from observable bodies.

    But here's the rub. The materialist version of death is the ultimate
    killjoy null hypothesis. The epistemological problem of knowing what
    it is "like" to be dead can never be resolved. Nevertheless, I think
    that Unamuno would be proud of recent scientific attempts to address
    the mechanics of his paradox. In a recent study, for example, I
    reported that when adult participants were asked to reason about the
    psychological abilities of a protagonist who had just died in an
    automobile accident, even participants who later classified themselves
    as "extinctivists" (i.e., those who endorsed the statement "what we
    think of as the 'soul,' or conscious personality of a person, ceases
    permanently when the body dies") nevertheless stated that the dead
    person knew that he was dead. For example, when asked whether the dead
    protagonist knew that he was dead (a feat demanding, of course,
    ongoing cognitive abilities), one young extinctivist's answer was
    almost comical. "Yeah, he'd know, because I don't believe in the
    afterlife. It is non-existent; he sees that now." Try as he might
    to be a good materialist, this subject couldn't help but be a dualist.

    How do I explain these findings? Like reasoning about one's past
    mental states during dreamless sleep or while in other somnambulistic
    states, consciously representing a final state of non-consciousness
    poses formidable, if not impassable, cognitive constraints. By relying
    on simulation strategies to derive information about the minds of dead
    agents, you would in principle be compelled to "put yourself into the
    shoes" of such organisms, which is of course an impossible task. These
    constraints may lead to a number of telltale errors, namely Type I
    errors (inferring mental states when in fact there are none),
    regarding the psychological status of dead agents. Several decades
    ago, the developmental psychologist Gerald Koocher described, for
    instance, how a group of children tested on death comprehension
    reflected on what it might be like to be dead "with references to
    sleeping, feeling 'peaceful,' or simply 'being very dizzy.'" More
    recently, my colleague David Bjorklund and I found evidence that
    younger children are more likely to attribute mental states to a dead
    agent than are older children, which is precisely the opposite pattern
    that one would expect to find if the origins of such beliefs could be
    traced exclusively to cultural learning.

    It seems that the default cognitive stance is reasoning that human
    minds are immortal; the steady accretion of scientific facts may throw
    off this stance a bit, but, as Unamuno found out, even science cannot
    answer the "big" question. Don't get me wrong. Like Unamuno, I don't
    believe in the afterlife. Recent findings have led me to believe that
    it's all a cognitive illusion churned up by a psychological system
    specially designed to think about unobservable minds. The soul is
    distinctly human all right. Without our evolved capacity to reason
    about minds, the soul would never have been. But in this case, the
    proof isn't in the empirical pudding. It can't be. It's death we're
    talking about, after all.

                                                   |[436]back to contents|
    ______________________________________________________________________

    [437]IRENE PEPPERBERG
    Research Scientist, MIT School of Architecture and Planning; Author,
    The Alex Studies

    I believe, but can't prove, that human language
    evolved from a combination of gesture and innate vocalizations, via
    the concomitant evolution of mirror neurons, and that birds will
    provide the best model for language evolution.
    Work on mirror neurons over the past decade has provided intriguing
    evidence, although no solid proof, for the gestural origins of speech.
    What can be called the mirror neuron hypothesis (MNH) suggests that
    only a small re-organization of the nonhuman primate brain was needed
    to create the wiring that underlies speech acquisition/learning. What
    is missing from the MNH is a model of the development of language from
    speech; it is here that I believe that a model based on avian
    vocalizations is most valuable.
    First, some background. Passerine birds can be divided into two
    groups: the oscines, who learn their songs, and the sub-oscines, who
    have a limited number of what seem to be innately-specified songs; the
    former have well-defined neural architectures and mechanisms for
    song acquisition; the latter lack brain structures for song
    acquisition, although they obviously have brain and vocal tract
    structures for producing song. The sub-oscines, in parallel with
    nonhuman primates, often use various activities or gestures (posture,
    numbers of repetitions of songs, feather erectness, types of flights,
    etc.) to provide additional information about the meaning of their
    utterances. W. John Smith, for example, can predict a flycatcher's
    actions by the combination of posture, flight, and singing pattern he
    observes. The songbirds, like human children learning language, will
    not learn their vocalizations if deafened, and need to hear, babble
    and practice songs before attaining adult competence; very recent work
    by Rose et al. demonstrates that even the syntax of their song is
    learned through early exposure to paired phrases, which are then
    combined to create the adult vocalizations. Such data, demonstrating
    how sparrows integrate information about temporally-related events and
    how they use that information to develop sequential vocal behavior,
    provide a viable model for human syntax acquisition.
    Now, no one knows if any birds have any mirror neurons, or how their
    mirror neurons would function if they did exist; some neural data on
    responses to self-song provide intriguing hints but go no further. I
    predict (a) the existence of such neurons in oscines and (b) that such
    neurons will have a robust role in oscine song development, but (c)
    that only more primitively-functioning mirror neurons (akin to the
    differences separating monkey and human MNs) will be found in
    sub-oscines.
    Now, what about the so-called missing link between learned and
    unlearned vocal behavior? No one has found such a missing link in the
    primate line. But Donald Kroodsma has recently discovered a flycatcher
    (a supposedly sub-oscine bird) that apparently learns its song. The
    song is simple, but has variations among groups of birds that
    constitute dialects. No one yet knows if these birds have brain
    mechanisms for song learning, or what these mechanisms might be. But I
    predict that Kroodsma's flycatchers will have mirror neurons that
    function in an intermediate manner, between those of the oscines and
    sub-oscines, and will provide a model for the missing link between
    nonhuman primate and human communication.

                                                   |[438]back to contents|
    ______________________________________________________________________

    [439]NASSIM NICHOLAS TALEB
    Mathematical trader; Author, Fooled By Randomness

    We are good at fitting explanations to the past, all
    the while living in the illusion of understanding the dynamics of
    history.

    My claim is about the severe overestimation of knowledge in what I
    call the " ex post" historical disciplines, meaning almost all of
    social science (economics, sociology, political science) and the
    humanities, everything that depends on the non-experimental analysis
    of past data. I am convinced that these disciplines do not provide
    much understanding of the world or even their own subject matter; they
    mostly fit a nice sounding narrative that caters to our desire (even
    need) to have a story. The implications are quite against conventional
    wisdom. You do not gain much by reading the newspapers, history books,
    analyses and economic reports; all you get is misplaced confidence
    about what you know. The difference between a cab driver and a history
    professor is only cosmetic as the latter can express himself in a
    better way.

    There is convincing but only partial empirical evidence of this
    effect. The evidence can only be seen in the disciplines that offer
    both quantitative data and quantitative predictions by the experts,
    such as economics. Economics and finance are an empiricist's dream as
    we have a goldmine of data for such testing. In addition there are
    plenty of "experts", many of whom make more than a million a year, who
    provide forecasts and publish them for the benefit of their clients.
    Just check their forecasts against what happens after. Their
    projections fare hardly better than random, meaning that their
    "stories" are convincing, beautiful to listen to, but do not seem to
    help you more than listening to, say, a Chicago cab driver. This
    extends to inflation, growth, interest rates, balance of payment, etc.
    (While someone may argue that their forecasts might impact these
    variables, the mechanism of "self-canceling prophecy" can be taken
    into account). Now consider that we depend on these people for
    governmental economic policy!
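    (A minimal sketch of the kind of test described above: score forecasts
    against what actually happened and against a naive "no change"
    benchmark. The series and the "expert" forecasts below are invented
    placeholders; a real test would plug in actual published forecasts and
    realized data.)

        # Compare forecast errors against a naive benchmark.
        import numpy as np

        rng = np.random.default_rng(1)
        actual = np.cumsum(rng.normal(size=100))  # stand-in economic series
        outcome = actual[1:]                      # what actually happened at t+1

        naive_forecast = actual[:-1]              # "next value = last value"
        expert_forecast = actual[:-1] + 0.5 * rng.normal(size=99)  # invented calls

        def mae(forecast):
            return np.mean(np.abs(forecast - outcome))

        print("expert MAE:", mae(expert_forecast))
        print("naive  MAE:", mae(naive_forecast))
        # On the claim above, real experts' errors come out no smaller than
        # the naive benchmark's, especially on the large deviations.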

    This implies that whether or not you read the newspapers will not make
    the slightest difference to your understanding of what can happen in
    the economy or the markets. Impressive tests on the effect of the news
    on prices were done by the financial empiricist Victor Niederhoffer in
    the 1960s and have been repeated many times since, with the same results.

    If you look closely at the data to check the reasons for this inability
    to see things coming, you will find that these people tend to guess
    the regular events (though quite poorly), but they miss the large
    deviations, the "unusual" events that carry large impacts. These
    outliers have a disproportionately large contribution to the total
    effect.

    Now I am convinced, yet cannot prove it quantitatively, that such
    overestimation can be generalized to anything where people give you a
    narrative-style story from past information, without experimentation.
    The difference is that the economists got caught because we have data
    (and techniques to check the quality of their knowledge) and
    historians, news analysts, biographers, and "pundits" can hide a
    little longer. Basically, historians might get a small trend here and
    there, but they missed the big events of the past centuries and,
    I am convinced, will not see much coming in the future. It has been
    said that "the wise see things coming." To me, the wise are the ones
    who know that they can't see things coming.

                                                   |[440]back to contents|
    ______________________________________________________________________

    [441]TODD FEINBERG, M.D.
    Psychiatrist and Neurologist, Albert Einstein College of Medicine;
    Author, Altered Egos

    I believe the human race will never decide that an
    advanced computer possesses consciousness. Only in science fiction
    will a person be charged with murder if they unplug a PC. I believe
    this because I hold, but cannot yet prove, that in order for an entity
    to be conscious and possess a mind, it has to be a living being.

    Being alive, of course, does not guarantee the presence of a mind. For
    example, a plant carries on the necessary metabolic functions to be
    alive, but still does not possess a mind. A chimpanzee, on the other
    hand, is a different story. All the behavioral features we share with
    chimps in addition to life, such as intelligence, the ability to
    deceive, mirror self-recognition, and some individual social identity,
    make chimps seem so much like us that many in the scientific community
    intuitively grant chimps "beinghood" and consciousness.
    In addition to being alive, therefore, it appears that a living thing
    must be a being, must possess a self, to possess a mind. But silicon
    chips are not alive, and computers are not beings. I argue that this
    is so because the particular material substance and arrangement of the
    brain is essential to the creation of consciousness and "beinghood."
    Computers will never achieve consciousness because in order for a
    computer to be "conscious like us" it will need to be made of living
    stuff like us, to grow like us, and unfortunately, to be able to die
    like us.

                                                   |[442]back to contents|
    ______________________________________________________________________

    [443]KAI KRAUSE
    Software: Concepts, Artwork & Interface Design; Byteburg Research Lab
    above the Rhein River
    I always felt, but can't prove outright: Zen is wrong. Then is right.
    Everything is not about the now, as in the "here and now", "living for
    the moment". On the contrary: I believe everything is about the before
    then and the back then.

    It is about the anticipation of the moment and the memory of the
    moment, but not the moment.

    In German there is a beautiful little word for it: "Vorfreude", which
    still is a shade different from "delight" or "pleasure" or even
    "anticipation". It is the "Pre-Delight", the "Before-Joy", or as a
    little linguistic concoction: the "ForeFun"; in a single word trying
    to express the relationship of time, the pleasure of waiting for the
    moment to arrive, the can't wait moments of elation, of hoping for
    some thing, some one, some event to happen.

    Whether it's on a small scale like that special taste of your favorite
    food, waiting to see a loved one, that one moment in a piece of music,
    a sequence in a movie....or the larger versions: the expectation of a
    beautiful vacation, the birth of a baby, your acceptance of an Oscar.

    We have been told by wise men, Dalais and Maharishis, that it is
    supposedly all about those moments, to cherish the second it happens
    and never mind the continuance of time...

    But for me, since early childhood days, I realized somehow: the beauty
    lies in the time before, the hope for, the waiting for, the imaginary
    picture painted in perfection of that instant in time. And then, once
    it passes, in the blink of an eye, it will be the memory which really
    stays with you, the reflection, the remembrance of that time. Cherish
    the thought..., remember how....

    Nothing ever is as beautiful as its abstraction through the
    rose-colored glasses of anticipation... The toddler's hope for Santa
    Claus on Christmas Eve turns out to be a fat guy with a fashion issue.
    Waiting for the first kiss can give you waves of emotional shivers up
    your spine, but when it then actually happens, it's a bunch of
    molecules colliding, a bit of a mess, really.  It is not the real
    moment that matters. In Anticipation the moment will be glorified by
    innocence, not knowing yet. In Remembrance the moment will be
    sanctified by memory filters, not knowing any more.

    In the Zen version, trying to uphold the beauty of the moment in that
    moment is, in my eyes, a sad undertaking. Not so much because it can't
    be done--all manner of techniques have been put forth for how to be a
    happy human by mastering the art of it. But it also implies, by definition,
    that all those other moments live just as much under the spotlight:
    the mundane, the lame, the gross, the everyday routines of dealing
    with life's mere mechanics.

    In the Then version, it is quite the opposite: the long phases before
    and after last hundreds or thousands of times longer than the moment,
    and drown out the everyday humdrum entirely.

    Bluntly put: spend your life in the eternal bliss of always having
    something to hope for, something to wait for, plans not realized,
    dreams not come true.... Make sure you have new points on the horizon,
    that you purposely create. And at the same time, relive your memories,
    uphold and cherish them, keep them alive and share them, talk about
    them.

    Make plans and take pictures.

    I have no way of proving such a lofty philosophical theory, but I
    greatly anticipate the moment that I might... and once I have done it,
    I will, most certainly, never forget.

                                                   |[444]back to contents|
    ______________________________________________________________________

    [445]ELIZABETH SPELKE
    Psychologist, Harvard University
    I believe, first, that all people have the same
    fundamental concepts, values, concerns, and commitments, despite our
    diverse languages, religions, social practices, and expressed beliefs.
    If defenders and opponents of abortion, Israelis and Palestinians, or
    Cambridge intellectuals and Amazonian jungle dwellers were to get
    beyond their surface differences, each would discover that the common
    ground linking them to members of the other group equals that which
    binds their own group together. Our common conceptual and moral
    commitments spring from the core cognitive systems that allow an
    infant to grow rapidly and spontaneously into a competent participant
    in any human society.

    Second, one of our shared core systems centers on a notion that is
    false: the notion that members of different human groups differ
    profoundly in their concepts and values. This notion leads us to
    interpret the superficial differences between people as signs of
    deeper differences. It has quite a grip on us: Many people would lay
    down their lives for perfect strangers from their own community, while
    looking with suspicion at members of other communities. And all of us
    are apt to feel a special pull toward those who speak our language and
    share our ethnic background or religion, relative to those who don't.

    Third, the most striking feature of human cognition stems not from our
    core knowledge systems but from our capacity to rise above them.
    Humans are capable of discovering that our core conceptions are false,
    and of replacing them with truer ones. This change has happened
    dramatically in the domain of astronomy. Core capacities to perceive,
    act on, and reason about the surface layout predispose us to believe
    that the earth is a flat, extended surface on which gravity acts as a
    downward force. This belief has been decisively overturned, however,
    by the progress of science. Today, every child who plays computer
    games or watches Star Wars knows that the earth is one sphere among
    many, and that gravity pulls all these bodies toward one another.

    Together, my three beliefs suggest a fourth. If the cognitive sciences
    are given sufficient time, the truth of the claim of a common human
    nature eventually will be supported by evidence as strong and
    convincing as the evidence that the earth is round. As humans are
    bathed in this evidence, we will overcome our misconceptions of human
    differences. Ethnic and religious rivalries and conflicts will come to
    seem as pointless as debates over the turtles that our pancake earth
    sits upon, and our common need for a stable, sustainable environment
    for all people will be recognized. But this fourth belief is
    conditional. Our species is caught in a race between the progress of
    our science and the escalation both of our intergroup conflicts and of
    the destructive means to pursue them. Will humans last long enough for
    our science to win this race?

                                                   |[446]back to contents|
    ______________________________________________________________________

    [447]SAM HARRIS
    Neuroscience Graduate Student, UCLA; Author, The End of Faith

    Twenty-two percent of Americans claim to be certain
    that Jesus will return to earth to judge the living and the dead
    sometime in the next fifty years. Another twenty-two percent believe
    that he is likely to do so. The problem that most interests me at this
    point, both scientifically and socially, is the problem of belief
    itself. What does it mean, at the level of the brain, to believe that
    a proposition is true? The difference between believing and
    disbelieving a statement--Your spouse is cheating on you; you've just
    won ten million dollars--is one of the most potent regulators of human
    behavior and emotion. The instant we accept a given representation of
    the world as true, it becomes the basis for further thought and
    action; rejected as false, it remains a string of words.

    What I believe, though cannot yet prove, is that belief is a
    content-independent process. Which is to say that beliefs about
    God--to the degree that they are really believed--are the same as
    beliefs about numbers, penguins, tofu, or anything else. This is not
    to say that all of our representations of the world are acquired
    through language, or that all linguistic representations are on the
    same logical footing. And we know that different regions of the brain
    are involved in judging the truth-value of statements drawn from
    different content domains. What I do believe, however, is that the
    neural processes that govern the final acceptance of a statement as
    "true" rely on more fundamental, reward-related circuitry in our
    frontal lobes--probably the same regions that judge the pleasantness
    of tastes and odors. Truth may be beauty, and beauty truth, in more
    than a metaphorical sense. And false statements may, quite literally,
    disgust us.

    Once the neurology of belief becomes clear, and it stands revealed as
    an all-purpose emotion arising in a wide variety of contexts (often
    without warrant), religious faith will be exposed for what it is: a
    humble species of terrestrial credulity. We will then have additional,
    scientific reasons to declare that mere feelings of conviction are not
    enough when it comes time to talk about the way the world is. The only
    thing that guarantees that (sufficiently complex) beliefs actually
    represent the world is a chain of evidence and argument linking them
    to the world. Only on matters of religious faith do sane men and women
    regularly dispute this fact. Apart from removing the principal reason
    we have found to kill one another, a revolution in our thinking about
    religious belief would clear the way for new approaches to ethics and
    spiritual experience. Both ethics and spirituality lie at the very
    heart of what is good about being human, but our thinking on both
    fronts has been shackled to the preposterous for millennia.
    Understanding belief at the level of the brain may hold the key to new
    insights into the nature of our minds, to new rules of discourse, and
    to new frontiers of human cooperation.

                                                   |[448]back to contents|
    ______________________________________________________________________

    [449]LYNN MARGULIS
    Biologist, University of Massachusetts, Amherst; Author, Symbiosis in
    Cell Evolution.
    I feel that I know something that will turn out to be correct and
    eventually be proved true beyond doubt.
    What?

    That our ability to perceive signals in the environment evolved
    directly from our bacterial ancestors. That is, we, like all other
    mammals including our apish brothers, detect odors, distinguish tastes,
    hear bird song and drum beats, and we too feel the vibrations of the
    drums. With our eyes closed we detect the light of the rising sun.
    These abilities to sense our surroundings are a heritage that preceded
    the evolution of all primates, all vertebrate animals, indeed all
    animals. Such sensitivities to wafting plant scents, tasty salted
    mixtures, police cruiser sirens, loving touches and star light
    register because of our "sensory cells".

    These avant-garde cells of the nasal passages, the taste buds, the
    inner ear, the touch receptors in the skin and the retinal rods and
    cones all have in common the presence at their tips of projections
    ("cell processes") called cilia. Cilia have a recognizable fine
    structure. With a very high power ("electron") microscope a precise
    array of protein tubules can be seen: nine, exactly nine, pairs of
    tubules arranged in a circle, with two singlet tubules in the center
    of this array. All sensory cells have this common feature, whether in
    the light-sensitive retina of the eye or the balance-sensitive
    semicircular canals of the inner ear. Cross-section slices of the
    tails of human, mouse and even insect (fruit-fly) sperm all share this
    same instantly recognizable structure too. Why this peculiar pattern?
    No one knows for sure but it provides the evolutionist with a strong
    argument for common ancestry. The size (diameter) of the circle (0.25
    micrometers) and of the constituent tubules (0.024 micrometers)
    aligned in the circle is identical in the touch receptors of the human
    finger and the taste buds of the elephant.

    What do I feel that I know, in the sense of what Oscar Wilde said
    (that "even true things can be proved")?

    Not only that the sensory cilia derive from these exact 9-fold
    symmetrical structures in protists such as the "waving feet" of the
    paramecium or the tail of the vaginal-itch protist called Trichomonas
    vaginalis. Indeed, all biologists agree with the claim that sperm
    tails and all these forms of sensory cilia share a common ancestry.

    But I go much farther. I think the common ancestor of the cilium,
    but not the rest of the cell, was a free-swimming entity, a skinny
    snake-like bacterium that, 1500 million years ago, squiggled through
    muds in a frantic search for food. Attracted by some smells and
    repelled by others the bacteria, by themselves, already enjoyed a
    repertoire of sensory abilities that remain with their descendants to
    this day. In fact, this bacterial ancestor of the cilium never went
    extinct, rather some of its descendants are uncomfortably close to us
    today. This hypothetical bacterium, ancestor to all the cilia, was no
    ordinary rod-shaped little dot.

    No, this bacterium, which still has many living relatives, entered into
    symbiotic partnerships with other very different kinds of bacteria.
    Together the two partners swam and stuck together, and both
    persisted. What kind of bacterium became an attached symbiont that
    impelled its partner forward? None other than a squirming spirochete
    bacterium.

    The spirochete group of bacteria includes many harmless mud-dwellers
    but it also contains a few scary freaks: the treponeme of syphilis and
    the borrelias of Lyme disease. We animals got our exquisite ability to
    sense our surroundings--to tell light from dark, noise from silence,
    motion from stillness and fresh water from brackish brine--from a kind
    of bacterium whose relatives we despise. Cilia were once free agents,
    but they became an integral part of all animal cells. Even though the
    concept that cilia evolved from spirochetes has not been proved, I
    think it is true. Not only is it true but, given the powerful new
    techniques of molecular biology, I think the hypothesis will be
    conclusively proved. In the not-too-distant future people will wonder
    why so many scientists were so against my idea for so long!

                                                   |[450]back to contents|
    ______________________________________________________________________

    [451]GREGORY BENFORD
    Physicist, UC Irvine; Author, Deep Time

    [benford100.jpg] Why is there scientific law at all?

    We physicists explain the origin and structure of matter and energy,
    but not the laws that do this. Does the idea of causation apply to
    where the laws themselves came from? Even Alan Guth's "free lunch"
    gives us the universe after the laws start acting. We have narrowed
    down the range of field theories that can yield the big bang universe
    we live in, but why do the laws that govern it seem to be constant in
    time, and always at work?

    One can imagine a universe in which laws are not truly lawful. Talk of
    miracles does just this, when God is supposed to make things work.
    Physics aims to find The Laws and hopes that these will be uniquely
    constrained, as when Einstein wondered if God had any choice when He
    made the universe. One fashionable escape hatch from this asserts that
    there are infinitely many universes, each sealed off from the others,
    which can obey any sort of law one can imagine, with parameters or
    assumptions changed. This "multiverse" view represents the failure of
    our grand agenda, of course, and seems to me contrary to Occam's
    Razor--solving our lack of understanding by multiplying unseen
    entities into infinity.

    Perhaps it is a similar philosophical failure of imagination to think,
    as I do, that when we see order, there is usually an ordering
    principle. But what can constrain the nature of physical law?
    Evolution gave us our ornately structured biosphere, and perhaps a
    similar principle operates in selecting universes. Perhaps our
    universe arises, then, from selection for intelligences that can make
    fresh universes, perhaps in high energy physics experiments, or near
    black holes (as Lee Smolin supposed), where space-time gets contorted
    into plastic forms that can make new space-times. Then an Ur-universe
    that had intelligence could make others, and this reproduction with
    perhaps slight variation in its "genetics" drives the evolution of
    physical law.

    Selection arises because only firm laws can yield constant, benign
    conditions to form new life. Ed Harrison had similar ideas. Once life
    forms realize this, they could intentionally make more smart universes
    with the right, fixed laws, to produce ever more grand structures.
    There might be observable consequences of this prior evolution. If so,
    then we are an inevitable consequence of the universe, mirroring
    intelligences that have come before, in some earlier universe that
    deliberately chose to create more sustainable order. The fitness of
    our cosmic environment is then no accident. If we find evidence of
    fine-tuning in the Dyson and Rees sense, then, is this evidence for
    such views?

                                                   |[452]back to contents|
    ______________________________________________________________________

    [453]ARNOLD TREHUB
    Psychologist, University of Massachusetts, Amherst; Author, The
    Cognitive Brain.
    [trehub100.jpg] I have proposed a law of conscious content which
    asserts that for any experience, thought, question, or solution, there
    is a corresponding analog in the biophysical state of the brain. As a
    corollary to this principle, I have argued that conventional attempts
    to understand consciousness by simply searching for the neural
    correlates of consciousness (NCC) in theoretical and empirical
    investigations are too weak to ground a good understanding of
    conscious content. Instead, I have proposed that we go beyond NCC and
    explore brain events that have at least some similarity to our
    phenomenal experiences, namely, neuronal analogs of conscious content
    (NAC). In support of this approach, I have presented a theoretical
    model that goes beyond addressing the sheer correlation between mental
    states and neuronal events in the brain. It explains how neuronal
    analogs of phenomenal experience (NAC) can be generated, and it
    details how other essential human cognitive tasks can be accomplished
    by the particular structure and dynamics of putative neuronal
    mechanisms and systems in the brain.

    A large body of experimental findings, clinical findings, and
    phenomenal reports can be explained within a coherent framework by the
    neuronal structure and dynamics of my theoretical model. In addition,
    the model accurately predicts many classical illusions and perceptual
    anomalies. So I believe that the neuronal mechanisms and systems that
    I have proposed provide a true explanation for many important aspects
    of human cognition and phenomenal experience. But I can't prove it. Of
    course, competing theories about the brain, cognition, and
    consciousness can't be proved either. Providing
    the evidence is the best we can do--I think.

                                                   |[454]back to contents|
    ______________________________________________________________________

    [455]JUDITH RICH HARRIS
    Writer and Developmental Psychologist; Author, The Nurture Assumption
    [harris100.jpg] I believe, though I cannot prove it, that three--not
    two--selection processes were involved in human evolution.

    The first two are familiar: natural selection, which selects for
    fitness, and sexual selection, which selects for sexiness.

    The third process selects for beauty, but not sexual beauty--not adult
    beauty. The ones doing the selecting weren't potential mates: they
    were parents. Parental selection, I call it.

    What gave me the idea was a passage from a book titled Nisa: The Life
    and Words of a !Kung Woman, by the anthropologist Marjorie Shostak.
    Nisa was about fifty years old when she recounted to Shostak, in
    remarkable detail, the story of her life as a member of a
    hunter-gatherer group.

    One of the incidents described by Nisa occurred when she was a child.
    She had a brother named Kumsa, about four years younger than herself.
    When Kumsa was around three, and still nursing, their mother realized
    she was pregnant again. She explained to Nisa that she was planning to
    "kill"--that is, abandon at birth--the new baby, so that Kumsa could
    continue to nurse. But when the baby was born, Nisa's mother had a
    change of heart. "I don't want to kill her," she told Nisa. "This
    little girl is too beautiful. See how lovely and fair her skin is?"

    Standards of beauty differ in some respects among human societies; the
    !Kung are lighter-skinned than most Africans and perhaps they pride
    themselves on this feature. But Nisa's story provides an insight into
    two practices that used to be widespread and that I believe played an
    important role in human evolution: the abandonment of newborns that
    arrived at inopportune times (this practice has been documented in
    many human societies by anthropologists), and the use of aesthetic
    criteria to tip the scales in doubtful cases.

    Coupled with sexual selection, parental selection could have produced
    certain kinds of evolutionary changes very quickly, even if the
    heartbreaking decision of whether to rear or abandon a newborn was
    made in only a small percentage of births. The characteristics that
    could be affected by parental selection would have to be apparent even
    in a newborn baby. Two such characteristics are skin color and
    hairiness.

    Parental selection can help to explain how the Europeans, who are
    descended from Africans, developed white skin over such a short period
    of time. In Africa, a cultural preference for light skin (such as
    Nisa's mother expressed) would have been counteracted by other factors
    that made light skin impractical. But in less sunny Europe, light skin
    may actually have increased fitness, which means that all three
    selection processes might have worked together to produce the rapid
    change in skin color.

    Parental selection coupled with sexual selection can also account for
    our hairlessness. In this case, I very much doubt that fitness played
    a role; other mammals of similar size--leopards, lions, zebras,
    gazelles, baboons, chimpanzees, and gorillas--get along fine with fur
    in Africa, where the change to hairlessness presumably took place. I
    believe (though I cannot prove it) that the transition to hairlessness
    took place quickly, over a short evolutionary time period, and
    involved only Homo sapiens or its immediate precursor.

    It was a cultural thing. Our ancestors thought of themselves as
    "people" and thought of fur-bearing creatures as "animals," just as we
    do. A baby born too hairy would have been distinctly less appealing to
    its parents.

    If I am right that the transition to hairlessness occurred very late
    in the sequence of evolutionary changes that led to us, then this can
    explain two of the mysteries of paleoanthropology: the survival of the
    Neanderthals in Ice Age Europe, and their disappearance about 30,000
    years ago.

    I believe, though I cannot prove it, that Neanderthals were covered
    with a heavy coat of fur, and that Homo erectus, their ancestor, was
    as hairy as the modern chimpanzee. A naked Neanderthal could never
    have made it through the Ice Age. Sure, he had fire, but a blazing
    hearth couldn't keep him from freezing when he was out on a hunt. Nor
    could a deerskin slung over his shoulders, and there is no evidence
    that Neanderthals could sew. They lived mostly on game, so they had to
    go out to hunt often, no matter how rotten the weather. And the game
    didn't hang around conveniently close to the entrance to their cozy
    cave.

    The Neanderthals disappeared when Homo sapiens, who by then had
    learned the art of sewing, took over Europe and Asia. This new
    species, descended from a southern branch of Homo erectus, was unique
    among primates in being hairless. In their view, anything with fur on
    it could be classified as "animal"--or, to put it more bluntly, game.
    Neanderthal disappeared in Europe for the same reason the woolly
    mammoth disappeared there: the ancestors of the modern Europeans ate
    them. In Africa today, hungry humans eat the meat of chimpanzees and
    gorillas.

    At present, I admit, there is insufficient evidence either to confirm
    or disconfirm these suppositions. However, evidence to support my
    belief in the furriness of Neanderthals may someday be found.
    Everything we currently know about this species comes from hard stuff
    like rocks and bones. But softer things, such as fur, can be preserved
    in glaciers, and the glaciers are melting. Someday a hiker may come
    across the well-preserved corpse of a furry Neanderthal.

                                                   |[456]back to contents|
    ______________________________________________________________________

    [457]BRUCE STERLING
    Novelist; Author, Globalhead
    [sterling100.jpg]
    I can sum my intuition up in five words: we're in for climatic mayhem.

                                                   |[458]back to contents|
    ______________________________________________________________________

    [459]ALAN KAY
    Computer Scientist; Personal Computer Visionary, Senior Fellow, HP
    Labs
    [kay100.jpg] Einstein said "You must learn to distinguish between what
    is true and what is real". An apt longer quote of his is: "As far as
    the laws of mathematics refer to reality, they are not certain; and as
    far as they are certain, they do not refer to reality". That is, it is
    "true" that the three angles of a triangle add up to 180 degrees in
    Euclidean geometry of the plane, but it is not known how to show that
    this could
    hold in our physical universe (if there is any mass or energy in our
    universe then it doesn't seem to hold, and it is not actually known
    what our universe would be like without any mass or energy).

    So, science is a relationship between what we can represent and are
    able to think about, and "what's out there": it's an extension of good
    map making, most often using various forms of mathematics as the
    mapping languages. When we guess in science we are guessing about
    approximations and mappings to languages, we are not guessing about
    "the truth" (and we are not in a good state of mind for doing science
    if we think we are guessing "the truth" or "finding the truth"). This
    is not at all well understood outside of science, and there are
    unfortunately a few people with degrees in science who don't seem to
    understand it either.

    Sometimes in math one can guess a theorem that can be proved true.
    This is a useful process even if one's batting average is less than
    .500. Guessing in science is done all the time, and the difference
    between what is real and what is true is not a big factor in the
    guessing stage, but makes all the difference epistemologically later
    in the process.

    One corner of computing is a kind of mathematics (other corners
    include design, engineering, etc.). But there are very few interesting
    actual proofs in computing. A good Don Knuth quote is: "Beware of bugs
    in the above code; I have only proved it correct, not tried it."

    An analogy for why this is so is to the n-body problems (and other
    chaotic systems behaviors) in physics. An explosion of degrees of
    freedom (3 bodies and gravity is enough) makes a perfectly
    deterministic model impossible to solve analytically for a future
    state. However, we can compute any future state by brute force
    simulation and see what happens. By analogy, we'd like to prove useful
    programs correct, but we either have intractable degrees of freedom,
    or as in the Knuth quote, it is very difficult to know if we've
    actually gathered all the cases when we do a "proof".
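
    To make the analogy concrete, here is a minimal sketch of that kind of
    brute-force computation (the integrator, constants, and starting
    positions are my own illustrative choices, not Kay's): three
    gravitating bodies have no general closed-form solution, yet stepping
    Newton's law forward numerically lets us "see what happens".

        # Minimal sketch (illustrative, not from Kay's text): brute-force
        # simulation of three gravitating bodies in the plane.
        def step(bodies, dt=0.001, G=1.0):
            """One time step; bodies = list of [m, x, y, vx, vy]."""
            accels = []
            for i, (mi, xi, yi, _, _) in enumerate(bodies):
                ax = ay = 0.0
                for j, (mj, xj, yj, _, _) in enumerate(bodies):
                    if i == j:
                        continue
                    dx, dy = xj - xi, yj - yi
                    r3 = (dx * dx + dy * dy) ** 1.5
                    ax += G * mj * dx / r3   # Newton's law of gravitation
                    ay += G * mj * dy / r3
                accels.append((ax, ay))
            for b, (ax, ay) in zip(bodies, accels):
                b[3] += ax * dt              # update velocity
                b[4] += ay * dt
                b[1] += b[3] * dt            # update position
                b[2] += b[4] * dt

        bodies = [[1.0, 0.0, 0.0, 0.0, -0.5],
                  [1.0, 1.0, 0.0, 0.0, 0.5],
                  [0.5, 0.0, 1.0, 0.3, 0.0]]
        for _ in range(10_000):
            step(bodies)
        print(bodies)   # a future state, obtained by simulation, not proof

    No analytic formula hands us that final state, but the simulation
    computes it all the same--which is the sense in which computing often
    trades proof for trying it and seeing.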

    So a guess in computing is often architectural or a collection of
    "covering heuristics". An example of the latter is TCP/IP which has
    allowed "the world's largest and most scalable artifact--The
    Internet--to be successfully built. An example of the former is the
    guess I made in 1966 about objects--not that one could build
    everything from objects--that could be proved mathematically--but that
    using objects would be a much better way to represent most things.
    This is not very provable, but like the Internet, now has quite a body
    of evidence that suggests this was a good guess.

    Another guess I made long ago--that does not yet have a body of
    evidence to support it--is that what is special about the computer is
    analogous to and an advance on what was special about writing and then
    printing. It's not about automating past forms that has the big
    impact, but as McLuhan pointed out, when you are able to change the
    nature of representation and argumentation, those who learn these new
    ways will wind up being qualitatively different and better thinkers,
    and this will (usually) help advance our limited conceptions of
    civilization.

    This still seems like a good guess to me--but "truth" has nothing to
    do with it.

                                                   |[460]back to contents|
    ______________________________________________________________________

    [461]ROGER SCHANK
    Psychologist & Computer Scientist; Author, Designing World-Class
    E-Learning
    [rcs100.jpg] Irrational choices.

    I do not believe that people are capable of rational thought when it
    comes to making decisions in their own lives. People believe that they
    are behaving rationally and have thought things out, of course, but when
    major decisions are made--who to marry, where to live, what career to
    pursue, what college to attend--people's minds simply cannot cope with
    the complexity. When they try to rationally analyze potential options,
    their unconscious, emotional thoughts take over and make the choice
    for them.

    As an example of what I mean, consider a friend of mine who was told to
    select a boat as a wedding present by his father-in-law. He chose a
    very peculiar boat, which caused a real rift between him and his bride.
    She had expected a luxury cruiser, which is what his father-in-law had
    intended. Instead he selected a very rough boat that he could fashion
    as he chose. As he was an engineer, his primary concern was how it
    would handle the open ocean, and he made sure the engines were special
    ones that could be easily gotten at and that the boat rode very low in
    the water. When he was finished he had created a very functional but
    very ugly and uncomfortable boat.

    Now I have ridden with him on his boat many times. Always he tells me
    about its wonderful features that make it a rugged and very useful
    boat. But, the other day, as we were about to start a trip, he started
    talking about how pretty he thought his boat was, how he liked the
    wood, the general placement of things, and the way the rooms fit
    together. I asked him if he was describing a boat that he had been
    familiar with as a child and suggested that maybe this boat was really
    a copy of some boat he knew as a kid. He said, after some thought,
    that that was exactly the case: there had been a boat like that in his
    childhood and he had liked it a great deal.

    While he was arguing with his father-in-law, his wife, and nearly
    everyone he knew about his boat, defending his decision with all the
    logic he could muster, destroying the very conceptions of boats they
    had in mind, the simple truth was that his unconscious mind was ruling
    the decision-making process. It wanted what it knew and loved; too bad
    for the conscious mind, which had to figure out how to explain this to
    everybody else.

    Of course, psychoanalysts have made a living on trying to figure out
    why people make the decisions they do. The problem with psychoanalysis
    is that it purports to be able to cure people. This possibility I
    doubt very much. Freud was a doctor so I guess he got paid to fix
    things and got carried away. But his view of the unconscious basis of
    decision making was essentially correct. We do not know how we decide
    things, and in a sense we don't really care. Decisions are made for us
    by our unconscious; the conscious mind is in charge of making up
    reasons for those decisions that sound rational. We can, on the other hand,
    think rationally about the choices that other people make. We can do
    this because we do not know and are not trying to satisfy unconscious
    needs and childhood fantasies. As for making good decisions in our
    lives, when we do it is mostly random. We are always operating with
    too little information consciously and way too much unconsciously.

                                                   |[462]back to contents|
    ______________________________________________________________________

    [463]GINO SEGRE
    Physicist, University of Pennsylvania; Author, A Matter of Degrees
    [segre100.jpg] The Big Bang, that giant explosion of more than 13
    billion years ago, provides the accepted description of our Universe's
    beginning. We can trace with exquisite precision what happened during
    the expansion and cooling that followed that cataclysm, but the
    presence of neutrinos in that earliest phase continues to elude direct
    experimental confirmation.

    Neutrinos, once in thermal equilibrium, were supposedly freed from
    their bonds to other particles about two seconds after the Big Bang.
    Since then they should have been roaming undisturbed through
    intergalactic space, some 200 of them in every cubic centimeter of our
    Universe, altogether a billion of them for every single atom. Their
    presence is noted indirectly in the Universe's expansion. However,
    though they are presumably by far the most numerous type of material
    particle in existence, not a single one of those primordial neutrinos
    has ever been detected. It is not for want of trying, but the
    necessary experiments are almost unimaginably difficult. And yet those
    neutrinos must be there. If they are not, our whole picture of the
    early Universe will have to be totally reconfigured.

    Wolfgang Pauli's original 1930 proposal of the neutrino's existence
    was so daring he didn't publish it. Enrico Fermi's brilliant 1934
    theory of how neutrinos are produced in nuclear events was rejected
    for publication by Nature magazine as being too speculative. In the
    1950s neutrinos were detected in nuclear reactors and soon afterwards
    in particle accelerators. Starting in the 1960s, an experimental tour
    de force revealed their existence in the solar core. Finally, in 1987 a
    ten second burst of neutrinos was observed radiating outward from a
    supernova collapse that had occurred almost 200,000 years ago. When
    they reached the Earth and were observed, one prominent physicist
    quipped that extra-solar neutrino astronomy "had gone in ten seconds
    from science fiction to science fact". These are some of the
    milestones of 20th century neutrino physics.

    In the 21st century we eagerly await another one, the observation of
    neutrinos produced in the first seconds after the Big Bang. We have
    been able to identify them, infer their presence, but will we be able
    to actually see these minute and elusive particles? They must be
    everywhere around us, even though we still cannot prove it.

                                                   |[464]back to contents|
    ______________________________________________________________________

    [465]PIET HUT
    Astrophysicist, Institute for Advanced Study
    [hut100.jpg] Science, like most human activities, is based on a
    belief, namely the assumption that nature is understandable.

    If we are faced with a puzzling experimental result, we first try
    harder to understand it with currently available theory, using more
    clever ways to apply that theory. If that really doesn't work, we try
    to improve or perhaps even replace the theory. We never conclude that
    a not-yet understood result is in principle un-understandable.

    While some philosophers might draw a different conclusion--see the
    contribution by Nicholas Humphrey--as a scientist I strongly believe
    that Nature is understandable. And such a belief can neither be proved
    nor disproved.

    Note: undoubtedly, the notion of what counts as "understandable" will
    continue to change. What physicists consider to be understandable now
    is very different from what had been regarded as such one hundred
    years ago. For example, quantum mechanics tells us that repeating the
    same experiment will give different results. The discovery of quantum
    mechanics led us to relax the rigid requirement of a deterministic
    objective reality to a statistical agreement with a not fully
    determinable reality. Although at first sight such a restriction might
    seem to limit our understanding, we in fact have gained a far deeper
    understanding of matter through the use of quantum mechanics than we
    could possibly have obtained using only classical mechanics.

                                                   |[466]back to contents|
    ______________________________________________________________________

    [467]CLIFFORD PICKOVER
    Computer scientist, IBM's T. J. Watson Research Center; Author,
    Calculus and Pizza
    [pickover100.jpg] If we believe that consciousness is the result of
    patterns of neurons in the brain, our thoughts, emotions, and memories
    could be replicated in moving assemblies of Tinkertoys. The Tinkertoy
    minds would have to be very big to represent the complexity of our
    minds, but it nevertheless could be done, in the same way people have
    made computers out of 10,000 Tinkertoys. In principle, our minds could
    be hypostatized in patterns of twigs, in the movements of leaves, or
    in the flocking of birds. The philosopher and mathematician Gottfried
    Leibniz liked to imagine a machine capable of conscious experiences
    and perceptions. He said that even if this machine were as big as a
    mill and we could explore inside, we would find "nothing but pieces
    which push one against the other and never anything to account for a
    perception."

    If our thoughts and consciousness do not depend on the actual
    substances in our brains but rather on the structures, patterns, and
    relationships between parts, then Tinkertoy minds could think. If you
    could make a copy of your brain with the same structure but using
    different materials, the copy would think it was you. This seemingly
    materialistic approach to mind does not diminish the hope of an
    afterlife, of transcendence, of communion with entities from parallel
    universes, or even of God. Even Tinkertoy minds can dream, seek
    salvation and bliss--and pray.

                                                   |[468]back to contents|
    ______________________________________________________________________

    [469]SUSAN BLACKMORE
    Psychologist, Visiting Lecturer, University of the West of England,
    Bristol; Author, The Meme Machine
    [backmore.100.jpg] It is possible to live happily and morally without
    believing in free will. As Samuel Johnson said "All theory is against
    the freedom of the will; all experience is for it." With recent
    developments in neuroscience and theories of consciousness, theory is
    even more against it than it was in his time, more than 200 years ago.
    So I long ago set about systematically changing the experience. I now
    have no feeling of acting with free will, although the feeling took
    many years to ebb away.

    But what happens? People say I'm lying! They say it's impossible and
    so I must be deluding myself to preserve my theory. And what can I do
    or say to challenge them? I have no idea--other than to suggest that
    other people try the exercise, demanding as it is.

    When the feeling is gone, decisions just happen with no sense of
    anyone making them, but then a new question arises--will the decisions
    be morally acceptable? Here I have made a great leap of faith (or the
    memes and genes and world have done so). It seems that when people
    throw out the illusion of an inner self who acts, as many mystics and
    Buddhist practitioners have done, they generally do behave in ways
    that we think of as moral or good. So perhaps giving up free will is
    not as dangerous as it sounds--but this too I cannot prove.

    As for giving up the sense of an inner conscious self altogether--this
    is very much harder. I just keep on seeming to exist. But though I
    cannot prove it--I think it is true that I don't.

                                                   |[470]back to contents|
    ______________________________________________________________________

    [471]KEITH DEVLIN
    Mathematician, Stanford University; Author, The Millennium Problems
    [devlin100.jpg] Before we can answer this question we need to agree on
    what we mean by proof. (This is one of the reasons why it's good to
    have mathematicians around. We like to begin by giving precise
    definitions of what we are going to talk about, a pedantic tendency
    that sometimes drives our physicist and engineering colleagues crazy.)
    For instance, following Descartes, I can prove to myself that I exist,
    but I can't prove it to anyone else. Even to those who know me well
    there is always the possibility, however remote, that I am merely a
    figment of their imagination. If it's rock solid certainty you want
    from a proof, there's almost nothing beyond our own existence
    (whatever that means and whatever we exist as) that we can prove to
    ourselves, and nothing at all we can prove to anyone else.

    Mathematical proof is generally regarded as the most certain form of
    proof there is, and in the days when Euclid was writing his great
    geometry text Elements that was surely true in an ideal sense. But
    many of the proofs of geometric theorems Euclid gave were subsequently
    found to be incorrect--David Hilbert corrected many of them in the
    late nineteenth century, after centuries of mathematicians had
    believed them and passed them on to their students--so even in the
    case of a ten line proof in geometry it can be hard to tell right from
    wrong.

    When you look at some of the proofs that have been developed in the
    last fifty years or so, using incredibly complicated reasoning that
    can stretch into hundreds of pages or more, certainty is even harder
    to maintain. Most mathematicians (including me) believe that Andrew
    Wiles proved Fermat's Last Theorem in 1994, but did he really? (I
    believe it because the experts in that branch of mathematics tell me
    they do.)

    In late 2002, the Russian mathematician Grigori Perelman posted on the
    Internet what he claimed was an outline for a proof of the Poincaré
    Conjecture, a famous, century old problem of the branch of mathematics
    known as topology. After examining the argument for two years now,
    mathematicians are still unsure whether it is right or not. (They
    think it "probably is.")

    Or consider Thomas Hales, who has been waiting for six years to hear
    if the mathematical community accepts his 1998 proof of astronomer
    Johannes Kepler's 360-year-old conjecture that the most efficient way
    to pack equal sized spheres (such as cannonballs on a ship, which is
    how the question arose) is to stack them in the familiar pyramid-like
    fashion that greengrocers use to stack oranges on a counter. After
    examining Hales' argument (part of which was carried out by computer)
    for five years, in spring of 2003 a panel of world experts declared
    that, whereas they had not found any irreparable error in the proof,
    they were still not sure it was correct.

    With the idea of proof so shaky--in practice--even in mathematics,
    answering this year's Edge question becomes a tricky business. The
    best we can do is come up with something that we believe but cannot
    prove to our own satisfaction. Others will accept or reject what we
    say depending on how much credence they give us as a scientist,
    philosopher, or whatever, generally basing that decision on our
    scientific reputation and record of previous work. At times it can be
    hard to avoid the whole thing degenerating into a slanging match. For
    instance, I happen to believe, firmly, that staples of
    popular-science-books and breathless TV-specials such as ESP and
    morphic resonance are complete nonsense, but I can't prove they are
    false. (Nor, despite their repeated claims to the contrary, have the
    proponents of those crackpot theories proved they are true, or even
    worth serious study, and if they want the scientific community to take
    them seriously then the onus is very much on them to make a strong
    case, which they have so far failed to do.)

    Once you recognize that proof is, in practical terms, an unachievable
    ideal, even the old mathematician's standby of Gödel's Incompleteness
    Theorem (which on first blush would allow me to answer the Edge
    question with a statement of my belief that arithmetic is free of
    internal contradictions) is no longer available. Gödel's theorem showed
    that you cannot prove an axiomatically based theory like arithmetic is
    free of contradiction within that theory itself. But that doesn't mean
    you can't prove it in some larger, richer theory. In fact, in the
    standard axiomatic set theory, you can prove arithmetic is free of
    contradictions. And personally, I buy that proof. For me, as a living,
    human mathematician, the consistency of arithmetic has been proved--to
    my complete satisfaction.

    So to answer the Edge question, you have to take a common sense
    approach to proof--in this case proof being, I suppose, an argument
    that would convince the intelligent, professionally skeptical, trained
    expert in the appropriate field. In that spirit, I could give any
    number of specific mathematical problems that I believe are true but
    cannot prove, starting with the famous Riemann Hypothesis. But I think
    I can be of more use by using my mathematician's perspective to point
    out the uncertainties in the idea of proof. Which I believe (but
    cannot prove) I have.

                                                   |[472]back to contents|
    ______________________________________________________________________

    [473]LEONARD SUSSKIND
    Physicist, Stanford University
    [susskind100.jpg] Conversation With a Slow Student
    Student: Hi Prof. I've got a problem. I decided to do a little
    probability experiment--you know, coin flipping--and check some of the
    stuff you taught us. But it didn't work.

    Professor: Well I'm glad to hear that you're interested. What did you
    do?

    Student: I flipped this coin 1,000 times. You remember, you taught us
    that the probability to flip heads is one half. I figured that meant
    that if I flip 1,000 times I ought to get 500 heads. But it didn't
    work. I got 513. What's wrong?

    Professor: Yeah, but you forgot about the margin of error. If you flip
    a certain number of times then the margin of error is about the square
    root of the number of flips. For 1,000 flips the margin of error is
    about 30. So you were within the margin of error.

    Student: Ah, now I get it. Every time I flip 1,000 times I will always
    get something between 470 and 530 heads. Every single time! Wow, now
    that's a fact I can count on.

    Professor: No, no! What it means is that you will probably get between
    470 and 530.

    Student: You mean I could get 200 heads? Or 850 heads? Or even all
    heads?

    Professor: Probably not.

    Student: Maybe the problem is that I didn't make enough flips. Should
    I go home and try it 1,000,000 times? Will it work better?

    Professor: Probably.

    Student: Aw come on Prof. Tell me something I can trust. You keep
    telling me what probably means by giving me more probablies. Tell me
    what probability means without using the word probably.

    Professor: Hmmm. Well how about this: It means I would be surprised if
    the answer were outside the margin of error.

    Student: My god! You mean all that stuff you taught us about
    statistical mechanics and quantum mechanics and mathematical
    probability: all it means is that you'd personally be surprised if it
    didn't work?

    Professor: Well, uh...

    If I were to flip a coin a million times I'd be damn sure I wasn't
    going to get all heads. I'm not a betting man but I'd be so sure that
    I'd bet my life or my soul. I'd even go the whole way and bet a year's
    salary. I'm absolutely certain the laws of large numbers--probability
    theory--will work and protect me. All of science is based on it. But,
    I can't prove it and I don't really know why it works. That may be the
    reason why Einstein said, "God doesn't play dice." It probably is.
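
    A quick simulation makes the professor's rule of thumb concrete (this
    sketch is mine, not Susskind's; the number of trials is arbitrary): in
    repeated runs of 1,000 fair flips, the head count lands within about
    the square root of 1,000 of the expected 500 most--but not all--of the
    time.

        # Minimal sketch (not from the dialogue): how often do 1,000 fair
        # flips land within +/- sqrt(1000) (about 32) of the expected 500?
        import random

        N = 1_000                     # flips per experiment
        TRIALS = 10_000               # number of repeated experiments
        margin = round(N ** 0.5)      # the professor's rule of thumb

        within = 0
        for _ in range(TRIALS):
            heads = sum(random.randint(0, 1) for _ in range(N))
            if abs(heads - N // 2) <= margin:
                within += 1

        print(f"{within / TRIALS:.1%} of trials were within {margin} of {N // 2}")
        # Typically prints roughly 95%: "probably", never "always".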

                                                   |[474]back to contents|
    ______________________________________________________________________

    [475]ROBERT SAPOLSKY
    Neuroscientist, Stanford University, Author, A Primate's Memoir

    [sapolsky100.jpg] Well, of course, it is tempting to go for something
    like, "That the wheel, agriculture, and the Macarena were all actually
    invented by yetis." Or to do the sophomoric pseudo-ironic logic twist
    of, "That every truth can eventually be proven." Or to get up my
    hackles, draw up to my full height and intone, "Sir, we scientists
    believe in nothing that cannot be proven by the whetstone of science,
    verily our faith is our lack of faith," and then go off in a lab coat
    and a huff.

    The first two aren't worth the words, and the third just isn't so. No
    matter how many times we read Arrowsmith, scientists are subjective
    humans operating in an ostensibly objective business, so there are
    probably lots of things we take on faith.

    So mine would be a fairly simple, straightforward case of an
    unjustifiable belief, namely that there is no god(s) or such a thing
    as a soul (whatever the religiously inclined of the right persuasion
    mean by that word). I'm very impressed, moved, by one approach of
    people on the other side of the fence. These are the believers who
    argue that it would be a disaster, would be the very work of
    Beelzebub, for it to be proven that god exists. What good would
    religiosity be if it came with a transparently clear contract, instead
    of requiring the leap of faith into an unknowable void?

    So I'm taken with religious folks who argue that you not only can, but
    should believe without requiring proof. Mine is to not believe without
    requiring proof. Mind you, it would be perfectly fine with me if there
    were a proof that there is no god. Some might view this as a potential
    public health problem, given the number of people who would then run
    damagingly amok. But it's obvious that there's no shortage of folks
    running amok thanks to their belief. So that wouldn't be a problem
    and, all things considered, such a proof would be a relief--many
    physicists, especially astrophysicists, seem weirdly willing to go on
    about their communing with god about the Big Bang, but in my world of
    biologists, the god concept gets mighty infuriating when you spend
    your time thinking about, say, untreatably aggressive childhood
    leukemia.

    Finally, just to undo any semblance of logic here, I might even
    continue to believe there is no god, even if it was proven that there
    is one. A religious friend of mine once said to me that the concept of
    god is very useful, so that you can berate god during the bad times.
    But it is clear to me that I don't need to believe that there is a god
    in order to berate him.

                                                   |[476]back to contents|
    ______________________________________________________________________

    [477]FREEMAN DYSON
    Physicist, Institute for Advanced Study; Author, Disturbing the
    Universe

    [dysonf100.jpg] Since I am a mathematician, I give a precise answer to
    this question. Thanks to Kurt Gödel, we know that there are true
    mathematical statements that cannot be proved. But I want a little
    more than this. I want a statement that is true, unprovable, and
    simple enough to be understood by people who are not mathematicians.
    Here it is.

    Numbers that are exact powers of two are 2, 4, 8, 16, 32, 64, 128 and
    so on. Numbers that are exact powers of five are 5, 25, 125, 625 and
    so on. Given any number such as 131072 (which happens to be a power of
    two), the reverse of it is 270131, with the same digits taken in the
    opposite order. Now my statement is: it never happens that the reverse
    of a power of two is a power of five.

    The digits in a big power of two seem to occur in a random way without
    any regular pattern. If it ever happened that the reverse of a power
    of two was a power of five, this would be an unlikely accident, and
    the chance of it happening grows rapidly smaller as the numbers grow
    bigger. If we assume that the digits occur at random, then the chance
    of the accident happening for any power of two greater than a billion
    is less than one in a billion. It is easy to check that it does not
    happen for powers of two smaller than a billion. So the chance that it
    ever happens at all is less than one in a billion. That is why I
    believe the statement is true.
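
    Dyson's check on the small cases is easy to reproduce. Here is a
    minimal sketch (mine, not Dyson's) that searches every power of two
    below a billion for a digit reversal that is a power of five, and
    finds none:

        # Minimal sketch: no power of two below a billion reverses into a
        # power of five.
        LIMIT = 10**9

        # Powers of five up to the largest possible reversal of a number
        # below the limit (a 9-digit number reversed is still 9 digits).
        powers_of_five = set()
        p = 5
        while p < 10**10:
            powers_of_five.add(p)
            p *= 5

        n = 2
        while n < LIMIT:
            if int(str(n)[::-1]) in powers_of_five:
                print("counterexample:", n)
            n *= 2
        # Prints nothing: the statement holds for every power of two
        # checked, exactly as Dyson says.
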
    But the assumption that digits in a big power of two occur at random
    also implies that the statement is unprovable. Any proof of the
    statement would have to be based on some non-random property of the
    digits. The assumption of randomness means that the statement is true
    just because the odds are in its favor. It cannot be proved because
    there is no deep mathematical reason why it has to be true. (Note for
    experts: this argument does not work if we use powers of three instead
    of powers of five. In that case the statement is easy to prove because
    the reverse of a number divisible by three is also divisible by three.
    Divisibility by three happens to be a non-random property of the
    digits).

    It is easy to find other examples of statements that are likely to be
    true but unprovable. The essential trick is to find an infinite
    sequence of events, each of which might happen by accident, but with a
    small total probability for even one of them happening. Then the
    statement that none of the events ever happens is probably true but
    cannot be proved.

                                                   |[478]back to contents|
    ______________________________________________________________________

    [479]JOHN McWHORTER
    Linguist, Senior Fellow, Manhattan Institute; Author, Doing Our Own
    Thing

    [mcwhorter100.jpg] This year, researching the languages of Indonesia
    for an upcoming book, I happened to find out about a few very obscure
    languages spoken on one island that are much simpler than one would
    expect.

    Most languages are much, much more complicated than they need to be.
    They take on needless baggage over the millennia simply because they
    can. So, for instance, most languages of Indonesia have a good number
    of prefixes and/or suffixes. Their grammars often force the speaker to
    attend to nuances of difference between active and passive much more
    than a European language does, etc.

    But here were a few languages that had no prefixes or suffixes at all.
    Nor do they have any tones, as many languages in the world do. For one
    thing, languages that have been around forever that have no prefixes,
    suffixes, or tones are very rare worldwide. But then, where we do find
    them, they are whole little subfamilies, related variations on one
    another. Here, though, is a handful of small languages that contrast
    bizarrely with hundreds of surrounding relatives.

    One school of thought in how language changes says that this kind of
    thing just happens by chance. But my work has been showing me that
    contrasts like this are due to sociohistory. Saying that naked
    languages like this are spoken alongside ones as bedecked as Italian
    is rather like saying that kiwis are flightless just "because," rather
    than because their environment divested them of the need to fly.

    But for months I scratched my head over these languages. Why just
    them? Why there?

    So isn't it interesting that the island these languages are spoken on
    is none other than Flores, which has had its fifteen minutes of fame
    this year as the site where skeletons of the "little people" were
    found. Anthropologists have hypothesized that this was a different
    species of Homo. While the skeletons date back 13,000 years or
    more, local legend recalls "little people" living alongside modern
    humans, ones who had some kind of language of their own and could
    "repeat back" in modern humans' language.

    The legends suggest that the little people only had primitive language
    abilities, but we can't be sure here: to the untutored layman who
    hasn't taken any twentieth-century anthropology or linguistics
    classes, it is easy to suppose that an incomprehensible language is
    merely babbling.

    Now, I can only venture this highly tentatively. But what I "know"
    but cannot prove this year is: the reason languages like Keo and Ngada
    are so strangely streamlined on Flores is that an earlier ancestor of
    these languages, just as complex as its family members tend to be, was
    used as a second language by these other people and simplified. Just as
    our classroom French and Spanish avoids or streamlines a lot of the
    "hard stuff," people who learn a language as adults usually do not
    master it entirely.

    Specifically, I would hypothesize that the little people were
    gradually incorporated into modern human society over time--perhaps
    subordinated in some way--such that modern human children were hearing
    the little people's rendition of the language as much as a native one.

    This kind of process is why, for example, Afrikaans is a slightly
    simplified version of Dutch. Dutch colonists took on Bushmen as
    herders and nurses, and their children often heard second-language
    Dutch as much as their parents. Pretty soon, this new kind of Dutch
    was everyone's everyday language, and Afrikaans was born.

    Much has been made over the parallels between the evolution of
    languages and the evolution of animals and plants. However, I believe
    that one important difference is that while animals and plants can
    evolve towards simplicity as well as complexity depending on
    conditions, languages do not evolve towards simplicity in any
    significant, overall sense--unless there is some sociohistorical
    factor that puts a spoke in the wheel.

    So normally, languages are always drifting into being like Russian or
    Chinese or Navajo. They only become like Keo and Ngada--or Afrikaans,
    or creole languages like Papiamentu and Haitian, or even, I believe,
    English--because of the intervention of factors like forced labor and
    population relocation. Just maybe, we can now add interspecies contact
    to the list!

                                                   |[480]back to contents|
    ______________________________________________________________________

    [481]MARTIN E.P. SELIGMAN
    Psychologist, University of Pennsylvania, Author, Authentic Happiness

    [seligman100.jpg] The "rotten-to-the-core" assumption about human
    nature espoused so widely in the social sciences and the humanities is
    wrong. This premise has its origins in the religious dogma of original
    sin and was dragged into the secular twentieth century by Freud,
    reinforced by two world wars, the Great Depression, the cold war, and
    genocides too numerous to list. The premise holds that virtue,
    nobility, meaning, and positive human motivation generally are
    reducible to, parasitic upon, and compensations for what is really
    authentic about human nature: selfishness, greed, indifference,
    corruption and savagery. The only reason that I am sitting in front of
    this computer typing away rather than running out to rape and kill is
    that I am "compensated," zipped up, and successfully defending myself
    against these fundamental underlying impulses.

    In spite of its widespread acceptance in the religious and academic
    world, there is not a shred of evidence, not an iota of data, which
    compels us to believe that nobility and virtue are somehow derived
    from negative motivation. On the contrary, I believe that evolution
    has favored both positive and negative traits, and many niches have
    selected for morality, co-operation, altruism, and goodness, just as
    many have also selected for murder, theft, self-seeking, and
    terrorism.

    More plausible than the rotten-to-the-core theory of human nature is
    the dual aspect theory that the strengths and the virtues are just as
    basic to human nature as the negative traits: that negative motivation
    and emotion have been selected for by zero-sum-game survival
    struggles, while virtue and positive emotion have been selected for by
    positive-sum-game sexual selection. These two overarching systems sit
    side by side in our central nervous system ready to be activated by
    privation and thwarting, on the one hand, or by abundance and the
    prospect of success, on the other.

                                                   |[482]back to contents|
    ______________________________________________________________________

    [483]ALISON GOPNIK
    Psychologist, UC-Berkeley; Coauthor, The Scientist In the Crib

    [gopnik100.jpg] I believe, but cannot prove, that babies and young
    children are actually more conscious, more vividly aware of their
    external world and internal life, than adults are. I believe this
    because there is strong evidence for a functional trade-off with
    development. Young children are much better than adults at learning
    new things and flexibly changing what they think about the world. On
    the other hand, they are much worse at using their knowledge to act in
    a swift, efficient and automatic way. They can learn three languages
    at once but they can't tie their shoelaces.

    This trade-off makes sense from an evolutionary perspective. Our
    species relies more on learning than any other, and has a longer
    childhood than any other. Human childhood is a protected period in
    which we are free to learn without being forced to act. There is even
    some neurological evidence for this. Young children actually have
    substantially more neural connections than adults--more potential to
    put different kinds of information together. With experience, some
    connections are strengthened and many others disappear entirely. As
    the neuroscientists say, we gain conductive efficiency but lose
    plasticity.

    What does this have to do with consciousness? Consider the experiences
    we adults associate with these two kinds of functions. When we know
    how to do something really well and efficiently, we typically lose, or
    at least, reduce, our conscious awareness of that action. We literally
    don't see the familiar houses and streets on the well-worn route home,
    although, of course, in some functional sense we must be visually
    taking them in. In contrast, as adults when we are faced with the
    unfamiliar, when we fall in love with someone new, or when we travel
    to a new place, our consciousness of what is around us and inside us
    suddenly becomes far more vivid and intense. In fact, we are willing
    to expend lots of money, and lots of emotional energy, for those few
    intensely alive days in Paris or Beijing that we will remember long
    after months of everyday life have vanished.
    Similarly, as adults when we need to learn something new, say when we
    learn to skydive, or work out a new scientific idea, or even deal with
    a new computer, we become vividly, even painfully, conscious of what
    we are doing--we need, as we say, to pay attention. As we become
    expert we need less and less attention, and we experience the actual
    movements and thoughts and keystrokes less and less. We sometimes say
    that adults are better at paying attention than children, but really
    we mean just the opposite. Adults are better at not paying attention.
    They're better at screening out everything else and restricting their
    consciousness to a single focus. Again there is a certain amount of
    brain evidence for this. Some brain areas, like the dorsolateral
    prefrontal cortex, consistently light up for adults when they are
    deeply engaged in learning something new. But for more everyday tasks,
    these areas light up much less. For children, though, the pattern is
    different--these areas light up even for mundane tasks.

    I think that, for babies, every day is first love in Paris. Every
    wobbly step is skydiving, every game of hide and seek is Einstein in
    1905.

    The astute reader will note that this is just the opposite of what Dan
    Dennett believes but cannot prove. And this brings me to a second
    thing I believe but cannot prove. I believe that the problem of
    capital-C Consciousness will disappear in psychology just as the
    problem of Life disappeared in biology. Instead we'll develop much
    more complex, fine-grained and theoretically driven accounts of the
    connections between particular types of phenomenological experience
    and particular functional and neurological phenomena. The vividness
    and intensity of our attentive awareness, for example, may be
    completely divorced from our experience of a constant first-person I.
    Babies may be more conscious in one way and less in the other. The
    consciousness of pain may be entirely different from the consciousness
    of red which may be entirely different from the babbling stream of
    Joyce and Woolf.

                                                   |[484]back to contents|
    ______________________________________________________________________

    [485]STEVEN PINKER
    Psychologist, Harvard University; Author, The Blank Slate

    [pinker.100.jpg] In 1974, Marvin Minsky wrote that "there is room in
    the anatomy and genetics of the brain for much more mechanism than
    anyone today is prepared to propose." Today, many advocates of
    evolutionary and domain-specific psychology are in fact willing to
    propose the richness of mechanism that Minsky called for thirty years
    ago. For example, I believe that the mind is organized into cognitive
    systems specialized for reasoning about objects, space, numbers, living
    things, and other minds; that we are equipped with emotions triggered
    by other people (sympathy, guilt, anger, gratitude) and by the
    physical world (fear, disgust, awe); that we have different ways for
    thinking and feeling about people in different kinds of relationships
    to us (parents, siblings, other kin, friends, spouses, lovers, allies,
    rivals, enemies); and several peripheral drivers for communicating
    with others (language, gesture, facial expression).

    When I say I believe this but cannot prove it, I don't mean that it's
    a matter of raw faith or even an idiosyncratic hunch. In each case I
    can provide reasons for my belief, both empirical and theoretical. But
    I certainly can't prove it, or even demonstrate it in the way that
    molecular biologists demonstrate their claims, namely in a form so
    persuasive that skeptics can't reasonably attack it, and a consensus
    is rapidly achieved. The idea of a richly endowed human nature is
    still unpersuasive to many reasonable people, who often point to
    certain aspects of neuroanatomy, genetics, and evolution that appear
    to speak against it. I believe, but cannot prove, that these
    objections will be met as the sciences progress.

    At the level of neuroanatomy and neurophysiology, critics have pointed
    to the apparent homogeneity of the cerebral cortex and to the seeming
    interchangeability of cortical tissue in experiments in which patches
    of cortex are rewired or transplanted in animals. I believe that the
    homogeneity is an illusion, owing to the fact that the brain is a
    system for information processing. Just as all books look the same to
    someone who does not understand the language in which they are written
    (since they are all composed of different arrangements of the same
    alphanumeric characters), and the DVDs of all movies look the same
    under a microscope, the cortex may look homogeneous to the eye but
    nonetheless contain different patterns of connectivity and synaptic
    biases that allow it to compute very different functions. I believe
    these differences will be revealed in different patterns of gene
    expression in the developing cortex. I also believe that the apparent
    interchangeability of cortex occurs only in early stages of sensory
    systems that happen to have similar computational demands, such as
    isolating sharp signal transitions in time and space.

    At the level of genetics, critics have pointed to the small number of
    genes in the human genome (now thought to be less than 25,000) and to
    their similarity to those of other animals. I believe that geneticists
    will find that there is a large store of information in the noncoding
    regions of the genome (the so-called junk DNA), whose size, spacing,
    and composition could have large effects on how genes are expressed.
    That is, the genes themselves may code largely for the meat and juices
    of the organism, which are pretty much the same across species,
    whereas how they are sculpted into brain circuits may depend on a much
    larger body of genetic information. I also believe that many examples
    of what we call "the same genes" in different species may differ in
    tiny ways at the sequence level that have large consequences for how
    the organism is put together.

    And at the level of evolution, critics have pointed to how difficult
    it is to establish the adaptive function of a psychological trait. I
    believe this will change as we come to understand the genetic basis of
    psychological traits in more detail. New techniques in genomic
    analysis, which look for statistical fingerprints of selection in the
    genome, will show that many genes involved in cognition and emotion
    were specifically selected for in the primate, and in many cases the
    human, lineage.

                                                   |[486]back to contents|
    ______________________________________________________________________

    [487]JANNA LEVIN
    Physicist, Columbia University; Author, How The Universe Got Its Spots

    [levin.100.jpg] I believe there is an external reality and you are not
    all figments of my imagination. My friend asks me through the steam he
    blows off the surface of his coffee, how I can trust the laws of
    physics back to the origins of the universe. I ask him how he can
    trust the laws of physics down to his cup of coffee. He shows every
    confidence that the scalding liquid will not spontaneously defy
    gravity and fly up in his eyes. He lives with this confidence born of
    his empirical experience of the world. His experiments with gravity,
    heat, and light began in childhood when he palpated the world to test
    its materials. Now he has a refined and well-developed theory of
    physics, whether expressed in equations or not.

    I simultaneously believe more and less than he does. It is rational to
    believe what all of my empirical and logical tests of the world
    confirm--that there is a reality that exists independent of me. That
    the coffee will not fly upwards. But it is a belief nonetheless. Once
    I've gone that far, why stop at the perimeter of mundane experience?
    Just as we can test the temperature of a hot beverage with a tongue
    or a thermometer, we can test the temperature of the primordial light
    left over from the big bang. One is no less real than the other simply
    because it is remarkable.

    But how do I really know? If I measure the temperature of boiling
    water, all I really know is that mercury climbs a glass tube. Not even
    that, all I really know is that I see mercury climb a glass tube. But
    maybe the image in my mind's eye isn't real. Maybe nothing is real,
    not the mercury, not the glass, not the coffee, not my friend. They
    are all products of a florid imagination. There is no external
    reality, just me. Einstein? My creation. Picasso? My mind's forgery.
    reality, just me. But this solipsism is ugly and arrogant. How can I know that
    mathematics and the laws of physics can be reasoned down to the moment
    of creation of time, space, the entire universe? In the very same way
    that my friend believes in the reality of the second double cappuccino
    he orders. In formulating our beliefs, we are honest and critical and
    able to admit when we are wrong--and these are the cornerstones of
    truth.

    When I leave the café, I believe the room of couches and tables is
    still on the block at 122nd Street, that it is still full of people,
    and that they haven't evaporated when my attention drifts away. But if
    I am wrong and there is no external reality, then not only is this
    essay my invention, but so is the web, edge.org, all of its
    participants and their ingenious ideas. And if you are reading this, I
    have created you too. But if I am wrong and there is no external
    reality, then maybe it is me who is a figment of your imagination and
    the cosmos outside your door is your magnificent creation.

                                                   |[488]back to contents|
    ______________________________________________________________________

    [489]HAIM HARARI
    Physicist, former President, Weizmann Institute of Science

    [harari100.jpg] The electron has been with us for over a century,
    laying the foundations for the electronic revolution and all of
    information technology. It is believed to be a point-like, elementary
    and indivisible particle. Is it?

    The neutrino, more than a million times lighter than the electron, was
    predicted in the 1920's and discovered in the 1950's. It plays a
    crucial role in the creation of the stars, the sun and the heavy
    elements. It is elusive, invisible and weakly interacting. It is also
    considered fundamental and indivisible. Is it?

    Quarks do not exist as free objects, except at extremely tiny
    distances, deep within the confines of the particles which are
    constructed from them. Since the 1960's we believe that they are the
    most fundamental indivisible building blocks of protons, neutrons and
    nuclei. Are they?

    Nature has created two additional, totally unexplained, replicas of
    the electron, the neutrino and the most abundant quarks, u and d,
    forming three "generations" of fundamental particles. Each
    "generation" of particles is identical to the other two in all
    properties, except that the particle masses are radically different.
    Since each "generation" includes four fundamental particles, we end up
    with 12 different particles, which are allegedly indivisible,
    point-like and elementary. Are they?
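
    (An illustrative tally, not part of Harari's text: a few lines of
    Python, using the standard particle names, showing how three
    "generations" of four particles each yield the twelve allegedly
    elementary particles.)

      # The three "generations" described above: four fermions per
      # generation, twelve allegedly elementary particles in all.
      generations = {
          1: ["up quark (u)", "down quark (d)", "electron", "electron neutrino"],
          2: ["charm quark (c)", "strange quark (s)", "muon", "muon neutrino"],
          3: ["top quark (t)", "bottom quark (b)", "tau", "tau neutrino"],
      }
      assert sum(len(members) for members in generations.values()) == 12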

    The Atom, the nucleus and the proton, each in its own time, were
    considered elementary and indivisible, only to be replaced later by
    smaller objects as the fundamental building blocks. How can we be so
    arrogant as to exclude the possibility that this will happen again?
    Why would nature arbitrarily produce 12 different objects, with a very
    orderly pattern of electric charges and "color forces", with simple
    charge ratios between seemingly unrelated particles (such as the
    electron and the quark) and with a pattern of masses, which appears to
    be taken from the results of a lottery? Doesn't this "smell" again of
    further sub-particle structure?

    There is absolutely no experimental evidence for a further
    substructure within all of these particles. There is no completely
    satisfactory theory which might explain how such light and tiny
    particles can contain objects moving with enormous energies, a
    requirement of quantum mechanics. This is, presumably, why the
    accepted "party line" of particle physicists is to assume that we
    already have reached the most fundamental level of the structure of
    matter.

    For over twenty years, the hope has been that the rich spectrum of
    so-called fundamental particles will be explained as various modes of
    string vibrations and excitations. The astonishingly tiny string or
    membrane, rather than the point-like object, is allegedly at the
    bottom of the ladder describing the structure of matter. However, in
    spite of absolutely brilliant and ingenious mathematical work, not one
    experimental number has been explained in more than twenty years on
    the basis of the string hypothesis.

    Based on common sense and on an observation of the pattern of the
    known particles, without any experimental evidence and without any
    comprehensive theory, I have believed for many years, and I continue
    to believe, that the electron, the neutrino and the quarks are
    divisible. They are presumably made of different combinations of the
    same small number (two?) of more fundamental sub-particles. The latter
    may or may not have the string structure, and may or may not be
    themselves composites.

    Will we live to see the components of the electron?

                                                   |[490]back to contents|
    ______________________________________________________________________

    [491]PAUL DAVIES
    Physicist, Macquarie University, Sydney; Author, How to Build a Time
    Machine

    [davies100.jpg] One of the biggest of the Big Questions of existence
    is, Are we alone in the universe? Science has provided no convincing
    evidence one way or the other. It is certainly possible that life
    began with a bizarre quirk of chemistry, an accident so improbable
    that it happened only once in the entire observable universe--and we
    are it. On the other hand, maybe life gets going wherever there are
    earthlike planets. We just don't know, because we have a sample of
    only one. However, no known scientific principle suggests an inbuilt
    drive from matter to life. No known law of physics or chemistry favors
    the emergence of the living state over other states. Physics and
    chemistry are, as far as we can tell, "life blind."

    Yet I don't believe that life is a freak event. I think the universe
    is teeming with it. I can't prove it; indeed, it could be that mankind
    will never know the answer for sure. If we find life in our solar
    system, it most likely got there from Earth (or vice versa) in rocks
    kicked off planets by comet impacts. And to go beyond the solar system
    is the stuff of dreams. The best hope is that we develop instruments
    sensitive enough to detect life on extra-solar planets from Earth
    orbit. But, whilst not impossible, this is a formidable technical
    challenge.

    So why do I think we are not alone, when we have no evidence for life
    beyond Earth? Not for the fallacious popular reason: "the universe is
    so big there must be life out there somewhere." Simple statistics
    shows this argument to be bogus. If life is in fact a freak chemical
    event, it would be so unlikely to occur that it wouldn't happen twice
    among a trillion trillion trillion planets. Rather, I believe we are
    not alone because life seems to be a fundamental, and not merely an
    incidental, property of nature. It is built into the great cosmic
    scheme at the deepest level, and therefore likely to be pervasive. I
    make this sweeping claim because life has produced mind, and through
    mind, beings who do not merely observe the universe, but have come to
    understand it through science, mathematics and reasoning. This is
    hardly an insignificant embellishment on the cosmic drama, but a
    stunning and unexpected bonus. Somehow life is able to link up with
    the basic workings of the cosmos, resonating with the hidden
    mathematical order that makes it tick. And that's a quirk too far for
    me.
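
    (A back-of-envelope sketch in Python of the statistical point above.
    The per-planet probability is an invented assumption, not a figure
    Davies gives; the planet count is the "trillion trillion trillion" in
    the text.)

      # Expected number of independent origins of life = planets x probability.
      planets = 10**36          # "a trillion trillion trillion planets"
      p_freak = 10**-40         # assumed probability of a "freak" origin per planet
      print(planets * p_freak)  # 0.0001: not even one origin expected, let alone two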

                                                   |[492]back to contents|
    ______________________________________________________________________

    [493]KEVIN KELLY
    Editor-At-Large, Wired; Author, New Rules for the New Economy

    [kelly100.jpg] The orthodoxy in biology states that every cell in your
    body shares exactly the same DNA. It's your identity, your indelible
    fingerprint, and since all the cells in your body have been duplicated
    from your initial unique stem cell, these zillions of offspring cells
    all maintain your singular DNA sequence. It follows that when you
    submit a tissue sample for genetic analysis, it doesn't matter where it
    comes from. Normally technicians grab some from the easily accessible
    parts of your mouth, but they could just as well take some from your
    big toe, your liver, or an eyelash and get the same results.
    I believe, but cannot prove, that the DNA in your body (and all
    bodies) varies from part to part. I make this prediction based on what
    we know about biology, which is that nature abhors uniformity. Nowhere
    else in nature do we see identity maintained to such exactness. Nowhere
    else is there such fixity.
    I do not expect intra-soma variation to diverge very much. Indeed the
    genetic variation among individual humans is already relatively mild,
    among the least of all animals, so the diversity within a human body
    is unlikely to be greater than among human bodies--although that may
    be possible. More likely, intra-soma variation will be less than
    racial diversity but greater than zero.
    Biologists already know (even if the public doesn't) that the full
    sequence of DNA in your cells changes over time, as your chromosomes
    shorten each time your cells divide during growth. Because of a bug in the
    system, DNA is unable to duplicate itself when it gets to the very
    very tip of its chain, so at each division it winds up a few hundred
    bases short. This slight reduction after each of the cell's scores of
    divisions is currently seen as the chief culprit in cell death and
    thus your own death. But the variation I believe is happening is more
    fundamental. My guess is that DNA mutates in a population of the cells
    in your body much as it does in a population of bodies.
    The consequences are more than just curious. At the trivial end, if my
    belief were true, it would matter where you selected to sample your
    DNA from. And it might also affect when you get it, as this variation
    could change over time. If true, this variation might have some effect
    on locating the correct seminal cells for growing replacement organs
    and tissues.

    While I have no evidence for my belief right now, it is a provable
    assertion. It will be shown to be true or false as soon as we have
    ubiquitous cheap full-genome sequences at discount mall prices. That
    is, pretty soon. I believe that once we have a constant reading of our
    individual full DNA (many times over our lives) we will have no end of
    surprises. I would not be surprised to discover that pet owners
    accumulate some tiny fragments of their pet's DNA, which has somehow
    been laterally transferred via viruses to their own cellular DNA. Or
    that dairy farmers amass noticeable fragments of bovine DNA. Or that
    the DNA in our limbs somehow drifts genetically in a "limby" way,
    distinct from the variation in the cells in our nervous systems.
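
    (A minimal sketch, in Python, of how such a claim could eventually be
    checked once cheap full-genome sequencing arrives: read the same region
    from two tissues and count the sites that differ. The fragments below
    are invented toy data, not real sequence.)

      def count_differences(seq_a, seq_b):
          # Count mismatched bases between two equal-length sequences.
          return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

      cheek = "ACGTTGCAACGT"    # toy fragment "sampled" from the mouth
      liver = "ACGTTGCAACGA"    # toy fragment "sampled" from the liver
      print(count_differences(cheek, liver))   # 1 intra-soma variant here
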
    But I consider all this minor compared to a possible major
    breakthrough in understanding. We have a pretty good idea of how the
    "selection" in natural selection works: less fit organisms die. But
    when it comes to understanding how variation arises in Darwinian
    evolution all we can say is "random mutation" which is another way of
    saying "we don't know exactly." If there were intra-somatic variation
    and if we could easily observe it via massive constant full-genome
    sequencing then we might be able to figure out exactly how a mutation
    occurs, and whether there are patterns to those mutations, and to what
    extent such variation is induced or influenced by the body or the
    environment--all ideas which currently challenge the Darwinian wisdom
    that the body does not directly influence the genetic makeup of a
    cell. Monitoring genetic drift within a body may be a window into the
    origins of mutation itself.
    Even if these larger ideas don't pan out, the simple fact that DNA in
    each cell of your body is not 100% identical would be worth
    investigating. Such a fact would be a surprise, except to me.

                                                   |[494]back to contents|
    ______________________________________________________________________

    [495]PHILIP W. ANDERSON
    Physicist and Nobel laureate, Princeton University

    [anderson100.jpg] Is string theory a futile exercise as physics, as I
    believe it to be? It is an interesting mathematical specialty and has
    produced and will produce mathematics useful in other contexts, but it
    seems no more vital as mathematics than other areas of very abstract
    or specialized math, and doesn't on that basis justify the incredible
    amount of effort expended on it.
    My belief is based on the fact that string theory is the first science
    in hundreds of years to be pursued in pre-Baconian fashion, without
    any adequate experimental guidance. It proposes that Nature is the way
    we would like it to be rather than the way we see it to be; and it is
    improbable that Nature thinks the same way we do.
    The sad thing is that, as several young would-be theorists have
    explained to me, it is so highly developed that it is a full-time job
    just to keep up with it. That means that other avenues are not being
    explored by the bright, imaginative young people, and that alternative
    career paths are blocked.

                                                   |[496]back to contents|
    ______________________________________________________________________

    [497]STEPHEN KOSSLYN
    Psychologist, Harvard University; Author, Wet Mind
    [kosslyn100.jpg] Mental processes: An out-of-body existence?

    These days, it seems obvious that the mind arises from the brain (not
    the heart, liver, or some other organ). In fact, I personally have
    gone so far as to claim that "the mind is what the brain does." But
    this notion does not preclude an unconventional idea: Your mind may
    arise not simply from your own brain, but in part from the brains of
    other people.
    Let me explain. This idea rests on three key observations.

    The first is that our brains are limited, and so we use crutches to
    supplement and extend our abilities. For example, try to multiply 756
    by 312 in your head. Difficult, right? You would be happier with a
    pencil and piece of paper--or, better yet, an electronic calculator.
    These devices serve as prosthetic systems, making up for cognitive
    deficiencies (just as a wooden leg would make up for a physical
    deficiency).
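
    (The point is easily made by delegating the example to a one-line
    prosthetic system, here a Python interpreter rather than a calculator.)

      print(756 * 312)   # 235872, the answer the unaided brain struggles to produce
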
    The second observation is that the major prosthetic system we use is
    other people. We set up what I call "Social Prosthetic Systems"
    (SPSs), in which we rely on others to extend our reasoning abilities
    and to help us regulate and constructively employ our emotions. A good
    marriage may arise in part because two people can serve as effective
    SPSs for each other.
    The third observation is that a key element of serving as an SPS is
    learning how best to help someone. Others who function as your SPSs
    adapt to your particular needs, desires and predilections. And the act
    of learning changes the brain. By becoming your SPS, a person
    literally lends you part of his or her brain!
    In short, parts of other people's brains come to serve as extensions
    of your own brain. And if the mind is "what the brain does," then your
    mind in fact arises from the activity of not only your own brain, but
    those of your SPSs.
    There are many implications of these ideas, ranging from reasons why
    we behave in certain ways toward others to foundations of ethics and
    even to religion. In fact, one could even argue that when your body
    dies, part of your mind may survive. But before getting into such dark
    and dusty corners, it would be nice to have firm footing--to collect
    evidence that these speculations are in fact worth taking seriously.

                                                   |[498]back to contents|
    ______________________________________________________________________

    [499]JOSEPH LEDOUX
    Neuroscientist, New York University; Author, The Synaptic Self

    [ledoux100.jpg] For me, this is an easy question. I believe that
    animals have feelings and other states of consciousness, but neither
    I, nor anyone else, has been able to prove it. We can't even prove
    that other people are conscious, much less other animals. In the case
    of other people, though, we at least can have a little confidence
    since all people have brains with the same basic configurations. But
    as soon as we turn to other species and start asking questions about
    feelings, and consciousness in general, we are in risky territory
    because the hardware is different.

    When a rat is in danger, it does things that many other animals do.
    That is, it either freezes, runs away or fights back. People pretty
    much do the same things. Some scientists say that because a rat and a
    person act the same in similar situations, they have the same kinds of
    subjective experiences. I don't think we can really say this.

    There are two aspects of brain hardware that make it difficult for us
    to generalize from our personal subjective experiences to the
    experiences of other animals. One is the fact that the circuits most
    often associated with human consciousness involve the lateral
    prefrontal cortex (via its role in working memory and executive
    control functions). This broad zone is much more highly developed in
    people than in other primates, and whether it exists at all in
    non-primates is questionable. So certainly for those aspects of
    consciousness that depend on the prefrontal cortex, including aspects
    that allow us to know who we are and to make plans and decisions,
    there is reason to believe that even other primates might be different
    than people. The other aspect of the brain that differs dramatically
    is that humans have natural language. Because so much of human
    experience is tied up with language, consciousness is often said to
    depend on language. If so, then most other animals are ruled out of
    the consciousness game. But even if consciousness doesn't depend on
    language, language certainly changes consciousness so that whatever
    consciousness another animal has it is likely to differ from most of
    our states of consciousness.

    For these reasons, I think it is hard to know what consciousness might
    be like in another animal. If we can't measure it (because it is
    internal and subjective) and can't use our own experience to frame
    questions about it (because the hardware that makes it possible is
    different), it becomes difficult to study.

    Most of what I have said applies mainly to the content of conscious
    experience. But there is another aspect of consciousness that is less
    problematic scientifically. It is possible to study the processes that
    make consciousness possible even if we can't study the content of
    consciousness in other animals. This is exactly what is done in
    studies of working memory in non-human primates. One approach that
    has had some success in the area of conscious content in non-human
    primates has focused on a limited kind of consciousness, visual
    awareness. But this approach, by Koch and Crick, mainly gets at the
    neural correlates of consciousness rather than the causal mechanisms.
    The correlates and the mechanisms may be the same, but they may not.
    Interestingly, this approach also emphasizes the importance of
    prefrontal cortex in making visual awareness possible.

    So what about feelings? My view is that a feeling is what happens when
    an emotion system, like the fear system, is active in a brain that can
    be aware of its own activities. That is, what we call "fear" is the
    mental state that we are in when the activity of the defense system of
    the brain (or the consequences of its activity, such as bodily
    responses) is what is occupying working memory. Viewed this way,
    feelings are strongly tied to those areas of the cortex that are
    fairly unique to primates and especially well developed in people.
    When you add natural language to the brain, in addition to getting
    fairly basic feelings you also get fine gradations due to the ability
    to use words and grammar to discriminate and categorize states and to
    attribute them not just to ourselves but to others.

    There are other views about feelings. Damasio argues that feelings are
    due to more primitive activity in body sensing areas of the cortex and
    brainstem. Panksepp has a similar view, though he focuses more on the
    brainstem. Because this network has not changed much in the course of
    human evolution, it could therefore be involved in feelings that are
    shared across species. I don't object to this on theoretical grounds,
    but I don't think it can be proven because feelings can't be measured
    in other animals. Panksepp argues that if it looks like fear in rats
    and people, it probably feels like fear in both species. But how do
    you know that rats and people feel the same when they behave the same?
    A cockroach will escape from danger--does it, too, feel fear as it
    runs away? I don't think behavioral similarity is sufficient grounds
    for proving experiential similarity. Neural similarity helps--rats and
    people have similar brainstems, and a roach doesn't even have a brain.
    But is the brainstem responsible for feelings? Even if it were proven
    in people, how would you prove it in a rat?

    So now we're back where we started. I think rats and other mammals,
    and maybe even roaches (who knows?), have feelings. But I don't know
    how to prove it. And because I have reason to think that their
    feelings might be different than ours, I prefer to study emotional
    behavior in rats rather than emotional feelings. I study rats because
    you can make progress at the neural level, provided that the thing you
    measure is the same in rats and people. I wouldn't study language and
    consciousness in rats, so I don't study feelings either, because I
    don't know that they exist. I may be accused of being short-sighted
    for this, but I'd rather make progress on something I can study in
    rats than beat my head against the consciousness wall in these
    creatures.

    There's lots to learn about emotion through rats that can help people
    with emotional disorders. And there's lots we can learn about feelings
    from studying humans, especially now that we have powerful functional
    imaging techniques. I'm not a radical behaviorist. I'm just a
    practical emotionalist.

                                                   |[500]back to contents|
    ______________________________________________________________________

    [501]NEIL GERSHENFELD
    Physicist, MIT; Author, When Things Start to Think

    [gershenfeld100.jpg] What do you believe is true even though you
    cannot prove it?

    Progress.
    The enterprise that employs me, seeking to understand and apply
    insight into how the world works, is ultimately based on the belief
    that this is a good thing to do. But it's something of a leap of faith
    to believe that that will leave the world a better place--the evidence
    to date is mixed for technical advances monotonically mapping onto
    human advances.

    Naturally, this question has a technical spin for me. My current
    passion is the creation of tools for personal fabrication based on
    additive digital assembly, so that the uses of advanced technologies
    can be defined by their users. It's still no more than an assumption
    that that will lead to more good things than bad things being made,
    but, like the accumulated experience that democracy works better than
    monarchy, I have more faith in a future based on widespread access to
    the means for invention than one based on technocracy.

                                                   |[502]back to contents|
    ______________________________________________________________________

    [503]LAWRENCE KRAUSS
    Physicist, Case Western Reserve University; Author, Atom
    [krauss100.jpg] I believe our universe is not unique. As science has
    evolved, our place within the universe has continued to diminish in
    significance.

    First it was felt that the Earth was the center of the universe, then
    that our Sun was the center, and so on. Ultimately we now realize that
    we are located at the edge of a random galaxy that is itself located
    nowhere special in a large, potentially infinite universe full of
    other galaxies. Moreover, we now know that even the stars and visible
    galaxies themselves are but an insignificant bit of visible pollution
    in a universe that is otherwise dominated by 'stuff' that doesn't
    shine.

    Dark matter dominates the masses of galaxies and clusters by a factor
    of 10 compared to normal matter. And now we have discovered that even
    matter itself is almost insignificant. Instead empty space itself
    contains more than twice as much energy as that associated with all
    matter, including dark matter, in the universe. Further, as we ponder
    the origin of our universe, and the nature of the strange dark energy
    that dominates it, every plausible theory that I know of suggests that
    the Big Bang that created our visible universe was not unique. There
    are likely to be a large, and possibly infinite number of other
    universes out there, some of which may be experiencing Big Bangs at
    the current moment, and some of which may have already collapsed
    inward into Big Crunches. From a philosophical perspective this may be
    satisfying to some, who find a universe with a definite beginning but
    no definite end dissatisfying. In this case, in the 'metaverse', or
    'multiverse', things may seem much more uniform in time.
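
    (A rough Python sketch of the bookkeeping behind those factors. The
    density fractions are approximate present-day values assumed here, not
    figures Krauss quotes.)

      omega_lambda = 0.70   # dark energy, as a fraction of the total energy density
      omega_matter = 0.30   # all matter, dark plus ordinary
      omega_baryon = 0.05   # ordinary ("normal") matter
      omega_dark = omega_matter - omega_baryon

      print(omega_lambda / omega_matter)   # about 2.3: "more than twice as much energy"
      print(omega_dark / omega_baryon)     # about 5 on average; nearer 10 within galaxies and clusters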

    At every instant there may be many universes being born, and others
    dying. But philosophy aside, the existence of many different causally
    disconnected universes--regions with which we will never ever be able
    to have direct communication, and thus which will forever be out of
    reach of direct empirical verification--may have significant impacts
    on our understanding of our own universe. Their existence may help
    explain why our own universe has certain otherwise unexpected
    features, because in a metaverse with a possibly infinite number of
    different universes, which may themselves vary in their fundamental
    features, it could be that life like our own would evolve only in
    universes with a special set of characteristics.

    Whether or not this anthropic type of argument is necessary to
    understand our universe--and I personally hope it isn't--I
    nevertheless find it satisfying to think that it is likely that not
    only are we not located in a particularly special place in our
    universe, but that our universe itself may be relatively insignificant
    on a larger cosmic scale. It represents perhaps the ultimate
    Copernican Revolution.

                                                   |[504]back to contents|
    ______________________________________________________________________

    [505]WILLIAM CALVIN
    Neurobiologist, University of Washington; Author, A Brief History of
    the Mind
    [calvin100.jpg] Dan Dennett has it right in his comments below when he
    puts the emphasis on acquiring language, not having language, as a
    precondition for our kind of consciousness. For what it's worth, I
    have some (likely unprovable) beliefs about why the preschooler's
    acquisition of a structured language is so important for all the rest
    of her higher intellectual function. Besides syntax, intellect
    includes structured stuff such as multistage contingent planning,
    chains of logic, games with arbitrary rules, and our passion for
    discovering "how things hang together."
    Many animals have some version of a critical period for tuning up
    sensory perception. Humans also seem to have one for structured
    language, judging from the experience with the deaf children of
    hearing parents who are not exposed to a rich sign language during the
    preschool years. Oliver Sacks in "Seeing Voices" described an
    11-year-old boy who had been thought to be retarded but proved to be
    merely deaf. After a year of ASL instruction, Sacks interviewed him:

      "Joseph saw, distinguished, categorized, used; he had no problems
      with perceptual categorization or generalization, but he could not,
      it seemed, go much beyond this, hold abstract ideas in mind,
      reflect, play, plan. He seemed completely literal--unable to juggle
      images or hypotheses or possibilities, unable to enter an
      imaginative or figurative realm.... He seemed, like an animal, or
      an infant, to be stuck in the present, to be confined to literal
      and immediate perception..."

    In the first year, an infant is busy creating categories for the
    speech sounds she hears. By the second year, the toddler is busy
    picking up new words, each composed of a series of those phoneme
    building blocks. In the third year, she starts picking up on those
    typical combinations of words that we call grammar or syntax. She soon
    graduates to speaking long structured sentences. In the fourth year,
    she infers a patterning to the sentences and starts demanding proper
    endings for her bedtime stories. It is pyramiding, using the building
    blocks at the immediately subjacent level. Four levels in four years!
    These years see a lot of softwiring via the pruning and enhancement of
    the prenatal connections between cortical neurons, partly on the basis
    of how useful a connection has been so far in life. Some such
    connections help you assemble a novel combination of words, check them
    for nonsense via some sort of quality control, and then--mirabile
    dictu--speak a sentence you've never uttered before. Some must be in
    workspaces that could plan not only sentences but an agenda for the
    weekend or a chain of logic or check out a chess move before you make
    it--even be tickled by structured music with its multiple interwoven
    melodies.
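
    (A toy sketch of the pyramiding, in Python, with an invented
    three-word vocabulary: each level is composed from building blocks at
    the level immediately below.)

      phonemes = ["th", "e", "d", "o", "g", "r", "a", "n"]   # level 1: speech-sound categories
      words = ["the", "dog", "ran"]                          # level 2: words built from phonemes
      sentence = " ".join(words)                             # level 3: a structured sentence
      story = sentence.capitalize() + ". The end."           # level 4: a story with a proper ending
      print(story)   # "The dog ran. The end."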

    Then tuning up the workspace for structured language in the preschool
    years would likely carry over to those other structured aspects of
    intellect. That's why I like the emphasis on acquiring language as a
    precondition for consciousness: tuning up to sentence structure might
    make the child better able to perform at nonlanguage tasks which also
    need some structuring. Improve one, improve them all?

    Is that what boosts our cleverness and intelligence? Is "our kind of
    consciousness" nothing but structured intellect with good quality
    control? Can't prove it, but it sure looks like a good candidate.

                                                   |[506]back to contents|
    ______________________________________________________________________

    [507]DANIEL C. DENNETT
    Philosopher, Tufts University; Author, Freedom Evolves
    [dennett100.jpg] I believe, but cannot yet prove, that acquiring a
    human language (an oral or sign language) is a necessary precondition
    for consciousness--in the strong sense of there being a subject, an I,
    a 'something it is like something to be.' It would follow that
    non-human animals and pre-linguistic children, although they can be
    sensitive, alert, responsive to pain and suffering, and cognitively
    competent in many remarkable ways--including ways that exceed normal
    adult human competence--are not really conscious (in this strong
    sense): there is no organized subject (yet) to be the enjoyer or
    sufferer, no owner of the experiences as contrasted with a mere
    cerebral locus of effects.

    This assertion is shocking to many people, who fear that it would
    demote animals and pre-linguistic children from moral protection, but
    this would not follow. Whose pain is the pain occurring in the newborn
    infant? There is not yet anybody whose pain it is, but that fact would
    not license us to inflict painful stimuli on babies or animals any
    more than we are licensed to abuse the living bodies of people in
    comas who are definitely not conscious. If selfhood develops
    gradually, then certain types of events only gradually become
    experiences, and there will be no sharp line between unconscious pains
    (if we may call them that) and conscious pains, and both will merit
    moral attention. (And, of course, the truth of the empirical
    hypothesis is in any case strictly independent of its ethical
    implications, whatever they are. Those who shun the hypothesis on
    purely moral grounds are letting wishful thinking overrule a properly
    inquisitive scientific attitude. I am happy to give animals and small
    children "the benefit of the doubt" for moral purposes, but not for
    scientific purposes.) Those who are shocked by my hypothesis should
    pause, if they can bear it, to notice that it is just as difficult
    to prove its denial as its assertion. But it can, I think, be proven
    eventually. Here's what it will take, one way or the other:

      (1) a well-confirmed model of the functional architecture of adult
      human consciousness, showing how long-distance pathways of
      re-entrant or reverberant interactions have to be laid down and
      sustained by the sorts of self-stimulation cascades that entrain
      language use;

      (2) an interpretation of the dynamics of the model that explains
      why, absent these well-traveled pathways of neural micro habit,
      there is no functional unity to the nervous system--no unity to
      distinguish an I from a we (or a multitude) as the candidate
      subject(s) subserved by that nervous system;

      (3) a host of further experimental work demonstrating the
      importance of what Thomas Metzinger calls the phenomenal model of
      the intentionality relation (PMIR) in enabling the sorts of
      experiences we consider central to our own adult consciousness.
      This work will demonstrate that animal cleverness never requires
      the abilities thus identified in humans, and that animals are in
      fact incapable of appreciating many things we normally take for
      granted as aspects of our conscious experience.

    This is an empirical hypothesis, and it could just as well be proven
    false. It could be proven false by showing that in fact the necessary
    pathways functionally uniting the relevant brain systems (in the ways
    I claim are required for consciousness) are already provided in normal
    infant or fetal development, and are in fact present in, say, all
    mammalian nervous systems of a certain maturity. I doubt that this is
    true because it seems clear to me that evolution has already
    demonstrated that remarkable varieties of adaptive coordination can be
    accomplished without such hyper-unifying meta-systems, by colonies of
    social insects, for instance. What is it like to be an ant colony?
    Nothing, I submit, and I think most would agree intuitively. What is
    it like to be a brace of oxen? Nothing (even if it is like something
    to be a single ox). But then we have to take seriously the extent to
    which animals--not just insect colonies and reptiles, but rabbits,
    whales, and, yes, bats and chimpanzees--can get by with somewhat
    disunified brains.

    Evolution will not have provided for the further abilities where they
    were not necessary for members of these species to accomplish the
    tasks their lives actually pose them. If animals were like the
    imaginary creatures in the fictions of Beatrix Potter or Walt Disney,
    they would have to be conscious pretty much the way we are. But
    animals are more different from us than we usually imagine, enticed as
    we are by these charming anthropomorphic fictions. We need these
    abilities to become persons, communicating individuals capable of
    asking and answering, requesting and forbidding and promising (and
    lying). But we don't need to be born with these abilities, since
    normal rearing will entrain the requisite neural dispositions. Human
    subjectivity, I am proposing, is thus a remarkable byproduct of human
    language, and no version of it should be extrapolated to any other
    species by default, any more than we should assume that the
    rudimentary communication systems of other species have verbs and
    nouns, prepositions and tenses.

    Finally, since there is often misunderstanding on this score, I am not
    saying that all human consciousness consists in talking to oneself
    silently, although a great deal of it does. I am saying that the
    ability to talk to yourself silently, as it develops, also brings
    along with it the abilities to review, to muse, to rehearse,
    recollect, and in general engage the contents of events in one's
    nervous system that would otherwise have their effects in a purely
    "ballistic" fashion, leaving no memories in their wake, and hence
    contributing to one's guidance in ways that are well described as
    unconscious. If a nervous system can come to sustain all these
    abilities without having language then I am wrong.

                                                   |[508]back to contents|
    ______________________________________________________________________

    [509]GEORGE B. DYSON
    Science Historian; Author, Project Orion
    [dysong.100.jpg]

    Interspecies coevolution of languages on the Northwest Coast.

    During the years I spent kayaking along the coast of British Columbia
    and Southeast Alaska, I observed that the local raven populations
    spoke in distinct dialects, corresponding surprisingly closely to the
    geographic divisions between the indigenous human language groups.
    Ravens from Kwakiutl, Tsimshian, Haida, or Tlingit territory sounded
    different, especially in their characteristic "tok" and "tlik."

    I believe this correspondence between human language and raven
    language is more than coincidence, though this would be difficult to
    prove.

                                                   |[510]back to contents|
    ______________________________________________________________________

    [511]DANIEL GILBERT
    Psychologist, Harvard University
    [gilbert100.jpg] In the not too distant future, we will be able to
    construct artificial systems that give every appearance of
    consciousness--systems that act like us in every way. These systems
    will talk, walk, wink, lie, and appear distressed by close elections.
    They will swear up and down that they are conscious and they will
    demand their civil rights. But we will have no way
    to know whether their behavior is more than a clever trick--more than
    the pecking of a pigeon that has been trained to type "I am, I am!"

    We take each other's consciousness on faith because we must, but after
    two thousand years of worrying about this issue, no one has ever
    devised a definitive test of its existence. Most cognitive scientists
    believe that consciousness is a phenomenon that emerges from the
    complex interaction of decidedly nonconscious parts (neurons), but
    even when we finally understand the nature of that complex
    interaction, we still won't be able to prove that it produces the
    phenomenon in question. And yet, I haven't the slightest doubt that
    everyone I know has an inner life, a subjective experience, a sense of
    self, that is very much like mine.

    What do I believe is true but cannot prove? The answer is: You!

                                                   |[512]back to contents|
    ______________________________________________________________________

    [513]MARC D. HAUSER
    Psychologist, Harvard University; Author, Wild Minds
    [hauser100.jpg] What makes humans uniquely smart?

    Here's my best guess: we alone evolved a simple computational trick
    with far reaching implications for every aspect of our life, from
    language and mathematics to art, music and morality. The trick: the
    capacity to take as input any set of discrete entities and recombine
    them into an infinite variety of meaningful expressions.

    Thus, we take meaningless phonemes and combine them into words, words
    into phrases, and phrases into Shakespeare. We take meaningless
    strokes of paint and combine them into shapes, shapes into flowers,
    and flowers into Matisse's water lilies. And we take meaningless
    actions and combine them into action sequences, sequences into events,
    and events into homicide and heroic rescues.
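
    (A toy Python sketch of the trick as described: a small, fixed set of
    discrete pieces plus a recombination rule with recursion, yielding an
    open-ended set of expressions. The vocabulary and rules are invented
    for illustration only.)

      import random

      nouns = ["the rat", "the cat", "the child"]
      verbs = ["chased", "saw", "heard"]

      def expression(depth=0):
          # Recombine discrete pieces; the recursive clause is what makes
          # the set of possible outputs unbounded.
          s = f"{random.choice(nouns)} {random.choice(verbs)} {random.choice(nouns)}"
          if depth < 3 and random.random() < 0.5:
              s += f" that {expression(depth + 1)}"
          return s

      print(expression())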

    I'll go one step further: I bet that when we discover life on other
    planets, although the materials running the computation may be
    different, that life will create open-ended systems of expression by
    means of the same trick, thereby giving birth to the process of
    universal computation.

                                                   |[514]back to contents|
    ______________________________________________________________________

    [515]NICHOLAS HUMPHREY
    Psychologist, London School of Economics; Author, The Mind Made Flesh
    [humphrey100.jpg] I believe that human consciousness is a conjuring
    trick, designed to fool us into thinking we are in the presence of an
    inexplicable mystery. Who is the conjuror and why is s/he doing it?
    The conjuror is natural selection, and the purpose has been to bolster
    human self-confidence and self-importance--so as to increase the value
    we each place on our own and others' lives.

    If this is right, it provides a simple explanation for why we, as
    scientists or laymen, find the "hard problem" of consciousness just so
    hard. Nature has meant it to be hard. Indeed "mysterian"
    philosophers--from Colin McGinn to the Pope--who bow down before the
    apparent miracle and declare that it's impossible in principle to
    understand how consciousness could arise in a material brain, are
    responding exactly as Nature hoped they would, with shock and awe.

    Can I prove it? It's difficult to prove any adaptationist account of
    why humans experience things the way they do. But here there is an
    added catch. The Catch-22 is that, just to the extent that Nature has
    succeeded in putting consciousness beyond the reach of rational
    explanation, she must have undermined the very possibility of showing
    that this is what she's done.

    But nothing's perfect. There may be a loophole. While it may seem--and
    even be--impossible for us to explain how a brain process could have
    the quality of consciousness, it may not be at all impossible to
    explain how a brain process could (be designed to) give rise to the
    impression of having this quality. (Consider: we could never explain
    why 2 + 2 = 5, but we might relatively easily be able to explain why
    someone should be under the illusion that 2 + 2 = 5).

    Do I want to prove it? That's a difficult one. If the belief that
    consciousness is a mystery is a source of human hope, there may be a
    real danger that exposing the trick could send us all to hell.

                                                   |[516]back to contents|
    ______________________________________________________________________

    [517]HOWARD GARDNER
    Psychologist, Harvard University; Author, Changing Minds
    [gardner100.jpg] The Brain Basis of Talent

    I believe that human talents are based on distinct patterns of brain
    connectivity. These patterns can be observed as the individual
    encounters and ultimately masters an organized activity or domain in
    his/her culture.
    Consider three competing accounts:

      #1 Talent is a question of practice. We could all become Mozarts or
      Einsteins if we persevered.
      #2 Talents are fungible. A person who is good in one thing could be
      good in everything.
      #3 The basis of talents is genetic. While true, this account
      misleadingly implies that a person with a "musical gene" will
      necessarily evince her musicianship, just as she evinces her eye
      color or, less happily, Huntington's disease.

    My Account: The most apt analogy is language learning. Nearly all of
    us can easily master natural languages in the first years of life. We
    might say that nearly all of us are talented speakers. An analogous
    process occurs with respect to various talents, with two differences:

      1. There is greater genetic variance in the potential to evince
      talent in areas like music, chess, golf, mathematics, leadership,
      written (as opposed to oral) language, etc.
      2. Compared to language, the set of relevant activities is more
      variable within and across cultures. Consider the set of games. A
      person who masters chess easily in culture 1 would not necessarily
      master poker or 'go' in culture 2.

    As we attempt to master an activity, neural connections of varying
    degrees of utility or disutility form. Certain of us have nervous
    systems that are predisposed to develop quickly along the lines needed
    to master specific activities (chess) or classes of activities
    (mathematics) that happen to be available in one or more cultures.
    Accordingly, assuming such exposure, we will appear talented and
    become experts quickly. The rest of us can still achieve some
    expertise, but it will take longer, require more effective teaching,
    and draw on intellectual faculties and brain networks that the
    talented person does not have to use.
    This hypothesis is currently being tested by Ellen Winner and
    Gottfried Schlaug. These investigators are imaging the brains of young
    students before they begin music lessons and for several years
    thereafter. They also are imaging control groups and administering
    control (non-music) tasks. After several years of music lessons,
    judges will determine which students have musical "talent." The
    researchers will document the brains of musically talented children
    before training, and how these brains develop.

    If Account #1 is true, hours of practice will explain all. If #2 is
    true, those best at music should excel at all activities. If #3 is
    true, individual brain differences should be observable from the
    start. If my account is true, the most talented students will be
    distinguished not by differences observable prior to training but
    rather by the ways in which their neural connections alter during the
    first years of training.

                   [519]John Brockman, Editor and Publisher
                 [520]Russell Weinberger, Associate Publisher
                        [521]contact: editor at edge.org

  421. http://edge.org/3rd_culture/bios/lanier.html
  422. http://edge.org/q2005/q05_easyprint.html#participants
  423. http://edge.org/3rd_culture/bios/barrow.html
  424. http://edge.org/q2005/q05_easyprint.html#participants
  425. http://edge.org/3rd_culture/bios/kurzweil.html
  426. http://edge.org/q2005/q05_easyprint.html#participants
  427. http://edge.org/3rd_culture/bios/kauffman.html
  428. http://edge.org/q2005/q05_easyprint.html#participants
  429. http://edge.org/3rd_culture/bios/marcus.html
  430. http://edge.org/q2005/q05_easyprint.html#participants
  431. http://edge.org/3rd_culture/bios/sabbagh.html
  432. http://edge.org/q2005/q05_easyprint.html#participants
  433. http://edge.org/3rd_culture/bios/atran.html
  434. http://edge.org/q2005/q05_easyprint.html#participants
  435. http://edge.org/3rd_culture/bios/bering.html
  436. http://edge.org/q2005/q05_easyprint.html#participants
  437. http://edge.org/3rd_culture/bios/pepperberg.html
  438. http://edge.org/q2005/q05_easyprint.html#participants
  439. http://edge.org/3rd_culture/bios/taleb.html
  440. http://edge.org/q2005/q05_easyprint.html#participants
  441. http://edge.org/3rd_culture/bios/feinberg.html
  442. http://edge.org/q2005/q05_easyprint.html#participants
  443. http://edge.org/3rd_culture/bios/krause.html
  444. http://edge.org/q2005/q05_easyprint.html#participants
  445. http://edge.org/3rd_culture/bios/spelke.html
  446. http://edge.org/q2005/q05_easyprint.html#participants
  447. http://edge.org/3rd_culture/bios/harriss.html
  448. http://edge.org/q2005/q05_easyprint.html#participants
  449. http://edge.org/3rd_culture/bios/margulis.html
  450. http://edge.org/q2005/q05_easyprint.html#participants
  451. http://edge.org/3rd_culture/bios/benford.html
  452. http://edge.org/q2005/q05_easyprint.html#participants
  453. http://edge.org/3rd_culture/bios/trehub.html
  454. http://edge.org/q2005/q05_easyprint.html#participants
  455. http://edge.org/3rd_culture/bios/harris.html
  456. http://edge.org/q2005/q05_easyprint.html#participants
  457. http://edge.org/3rd_culture/bios/sterling.html
  458. http://edge.org/q2005/q05_easyprint.html#participants
  459. http://edge.org/3rd_culture/bios/kay.html
  460. http://edge.org/q2005/q05_easyprint.html#participants
  461. http://edge.org/3rd_culture/bios/schank.html
  462. http://edge.org/q2005/q05_easyprint.html#participants
  463. http://edge.org/3rd_culture/bios/segre.html
  464. http://edge.org/q2005/q05_easyprint.html#participants
  465. http://edge.org/3rd_culture/bios/hut.html
  466. http://edge.org/q2005/q05_easyprint.html#participants
  467. http://edge.org/3rd_culture/bios/pickover.html
  468. http://edge.org/q2005/q05_easyprint.html#participants
  469. http://edge.org/3rd_culture/bios/blackmore.html
  470. http://edge.org/q2005/q05_easyprint.html#participants
  471. http://edge.org/3rd_culture/bios/devlin.html
  472. http://edge.org/q2005/q05_easyprint.html#participants
  473. http://edge.org/3rd_culture/bios/susskind.html
  474. http://edge.org/q2005/q05_easyprint.html#participants
  475. http://edge.org/3rd_culture/bios/sapolsky.html
  476. http://edge.org/q2005/q05_easyprint.html#participants
  477. http://edge.org/3rd_culture/bios/dysonf.html
  478. http://edge.org/q2005/q05_easyprint.html#participants
  479. http://edge.org/3rd_culture/bios/mcwhorter.html
  480. http://edge.org/q2005/q05_easyprint.html#participants
  481. http://edge.org/3rd_culture/bios/seligman.html
  482. http://edge.org/q2005/q05_easyprint.html#participants
  483. http://edge.org/3rd_culture/bios/gopnik.html
  484. http://edge.org/q2005/q05_easyprint.html#participants
  485. http://edge.org/3rd_culture/bios/pinker.html
  486. http://edge.org/q2005/q05_easyprint.html#participants
  487. http://edge.org/3rd_culture/bios/levin.html
  488. http://edge.org/q2005/q05_easyprint.html#participants
  489. http://edge.org/3rd_culture/bios/harari.html
  490. http://edge.org/q2005/q05_easyprint.html#participants
  491. http://edge.org/3rd_culture/bios/davies.html
  492. http://edge.org/q2005/q05_easyprint.html#participants
  493. http://edge.org/3rd_culture/bios/kelly.html
  494. http://edge.org/q2005/q05_easyprint.html#participants
  495. http://edge.org/3rd_culture/bios/anderson.html
  496. http://edge.org/q2005/q05_easyprint.html#participants
  497. http://edge.org/3rd_culture/bios/kosslyn.html
  498. http://edge.org/q2005/q05_easyprint.html#participants
  499. http://edge.org/3rd_culture/bios/LeDoux.html
  500. http://edge.org/q2005/q05_easyprint.html#participants
  501. http://edge.org/3rd_culture/bios/gershenfeld.html
  502. http://edge.org/q2005/q05_easyprint.html#participants
  503. http://edge.org/3rd_culture/bios/krauss.html
  504. http://edge.org/q2005/q05_easyprint.html#participants
  505. http://edge.org/3rd_culture/bios/calvin.html
  506. http://edge.org/q2005/q05_easyprint.html#participants
  507. http://edge.org/3rd_culture/bios/dennett.html
  508. http://edge.org/q2005/q05_easyprint.html#participants
  509. http://edge.org/3rd_culture/bios/dysong.html
  510. http://edge.org/q2005/q05_easyprint.html#participants
  511. http://edge.org/3rd_culture/bios/gilbert.html
  512. http://edge.org/q2005/q05_easyprint.html#participants
  513. http://edge.org/3rd_culture/bios/hauser.html
  514. http://edge.org/q2005/q05_easyprint.html#participants
  515. http://edge.org/3rd_culture/bios/humphrey.html
  516. http://edge.org/q2005/q05_easyprint.html#participants
  517. http://edge.org/3rd_culture/bios/gardner.html
  518. http://edge.org/q2005/q05_easyprint.html#participants
  519. http://edge.org/3rd_culture/bios/brockman.html
  520. http://edge.org/3rd_culture/bios/weinberger.html
  521. mailto:editor at edge.org