[Paleopsych] Science: What Don't We Know? (125th anniversary issue)

Premise Checker checker at panix.com
Sun Jul 3 15:02:18 UTC 2005


What Don't We Know? -- Kennedy and Norman 309 (5731): 75 -- Science
http://www.sciencemag.org/cgi/content/summary/sci;309/5731/75 et seq.

[All articles included. Read carefully. I'd like to know if there will be 
a different answer to the question, "How much can we boost IQ and 
scholastic achievement?" Actually, there was very little here touching 
upon the social sciences or social issues.]

Introduction to special issue
What Don't We Know?

    Donald Kennedy and Colin Norman

    At Science, we tend to get excited about new discoveries that lift the
    veil a little on how things work, from cells to the universe. That
    puts our focus firmly on what has been added to our stock of
    knowledge. For this anniversary issue, we decided to shift our frame
    of reference, to look instead at what we don't know: the scientific
    puzzles that are driving basic scientific research.

    We began by asking Science's Senior Editorial Board, our Board of
    Reviewing Editors, and our own editors and writers to suggest
    questions that point to critical knowledge gaps. The ground rules:
    Scientists should have a good shot at answering the questions over the
    next 25 years, or they should at least know how to go about answering
    them. We intended simply to choose 25 of these suggestions and turn
    them into a survey of the big questions facing science. But when a
    group of editors and writers sat down to select those big questions,
    we quickly realized that 25 simply wouldn't convey the grand sweep of
    cutting-edge research that lies behind the responses we received. So
    we have ended up with 125 questions, a fitting number for Science's
    125th anniversary.

    First, a note on what this special issue is not: It is not a survey of
    the big societal challenges that science can help solve, nor is it a
    forecast of what science might achieve. Think of it instead as a
    survey of our scientific ignorance, a broad swath of questions that
    scientists themselves are asking. As Tom Siegfried puts it in his
    introductory essay, they are "opportunities to be exploited."

    We selected 25 of the 125 questions to highlight based on several
    criteria: how fundamental they are, how broad-ranging, and whether
    their solutions will impact other scientific disciplines. Some have
    few immediate practical implications--the composition of the universe,
    for example. Others we chose because the answers will have enormous
    societal impact--whether an effective HIV vaccine is feasible, or how
    much the carbon dioxide we are pumping into the atmosphere will warm
    our planet, for example. Some, such as the nature of dark energy, have
    come to prominence only recently; others, such as the mechanism behind
    limb regeneration in amphibians, have intrigued scientists for more
    than a century. We listed the 25 highlighted questions in no special
    order, but we did group the 100 additional questions roughly by
    discipline.

    Our sister online publications are also devoting special issues to
    Science's 125th anniversary. The Science of Aging Knowledge
    Environment, SAGE KE (www.sageke.org), is surveying several big
    questions confronting researchers on aging. The Signal Transduction
    Knowledge Environment, STKE (www.stke.org), has selected classic
    Science articles that have had a high impact in the field of cell
    signaling and is highlighting them in an editorial guide. And
    Science's Next Wave (www.nextwave.org) is looking at the careers
    of scientists grappling with some of the questions Science has
    identified.

    We are acutely aware that even 125 unknowns encompass only a partial
    answer to the question that heads this special section: What Don't We
    Know? So we invite you to participate in a special forum on Science's
    Web site (www.sciencemag.org/sciext/eletters/125th), in which you
    can comment on our 125 questions or nominate topics we missed--and we
    apologize if they are the very questions you are working on.

--------------

How Hot Will the Greenhouse World Be?

    Richard A. Kerr

    Scientists know that the world has warmed lately, and they believe
    humankind is behind most of that warming. But how far might we push
    the planet in coming decades and centuries? That depends on just how
    sensitively the climate system--air, oceans, ice, land, and
    life--responds to the greenhouse gases we're pumping into the
    atmosphere. For a quarter-century, expert opinion was vague about
    climate sensitivity. Experts allowed that climate might be quite
    touchy, warming sharply when shoved by one climate driver or another,
    such as the carbon dioxide from fossil fuel burning, volcanic debris,
    or dimming of the sun. On the other hand, the same experts conceded
    that climate might be relatively unresponsive, warming only modestly
    despite a hard push toward the warm side.

    The problem with climate sensitivity is that you can't just go out and
    directly measure it. Sooner or later a climate model must enter the
    picture. Every model has its own sensitivity, but each is subject to
    all the uncertainties inherent in building a hugely simplified
    facsimile of the real-world climate system. As a result, climate
    scientists have long quoted the same vague range for sensitivity: A
    doubling of the greenhouse gas carbon dioxide, which is expected to
    occur this century, would eventually warm the world between a modest
    1.5°C and a whopping 4.5°C. This range--based on just two early
    climate models--first appeared in 1979 and has been quoted by every
    major climate assessment since.
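
    As a rough illustration (ours, not the article's), sensitivity is
    usually applied through the approximately logarithmic dependence of
    equilibrium warming on CO2 concentration, dT = S x log2(C/C0). A
    minimal Python sketch, in which the preindustrial baseline and the
    sensitivity values are assumptions:

        # Minimal sketch (not from the article): equilibrium warming under
        # the common logarithmic approximation dT = S * log2(C / C0), where
        # S is the sensitivity to a CO2 doubling. Baseline and S are
        # assumed, illustrative values.
        import math

        def equilibrium_warming(c_ppm, c0_ppm=280.0, sensitivity=3.0):
            """Warming in deg C at CO2 level c_ppm, for a sensitivity in
            deg C per doubling above the preindustrial baseline c0_ppm."""
            return sensitivity * math.log2(c_ppm / c0_ppm)

        for s in (1.5, 3.0, 4.5):  # the canonical range quoted in the text
            print(f"S = {s}: doubling (560 ppm) warms "
                  f"{equilibrium_warming(560.0, sensitivity=s):.1f} deg C")

    By construction, a doubling yields exactly S degrees of warming, so the
    canonical range maps directly onto the 1.5°C to 4.5°C spread quoted
    above.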

    Figure 1. A harbinger? Coffins being lined up during the
    record-breaking 2003 heat wave in Europe.

    Researchers are finally beginning to tighten up the range of possible
    sensitivities, at least at one end. For one, the sensitivities of the
    available models (5% to 95% confidence range) are now falling within
    the canonical range of 1.5°C to 4.5°C; some had gone considerably
    beyond the high end. And the first try at a new approach--running a
    single model while varying a number of model parameters such as cloud
    behavior--has produced a sensitivity range of 2.4°C to 5.4°C, with a
    most probable value of 3.2°C.

    Models are only models, however. How much better if nature ran the
    experiment? Enter paleoclimatologists, who sort out how climate
    drivers such as greenhouse gases have varied naturally in the distant
    past and how the climate system of the time responded. Nature, of
    course, has never run the perfect analog for the coming greenhouse
    warming. And estimates of how much carbon dioxide concentrations fell
    during the depths of the last ice age, or of how much sunlight the
    debris from the eruption of Mount Pinatubo in the Philippines blocked,
    will always carry lingering uncertainties. But paleoclimate estimates
    of climate
    sensitivity generally fall in the canonical range, with a best
    estimate in the region of 3°C.

    The lower end at least of likely climate sensitivity does seem to be
    firming up; it's not likely below 1.5°C, say researchers. That would
    rule out the negligible warmings proposed by some greenhouse
    contrarians. But climate sensitivity calculations still put a fuzzy
    boundary on the high end. Studies drawing on the past century's
    observed climate change plus estimates of natural and anthropogenic
    climate drivers yield up to 30% probabilities of sensitivities above
    4.5°C, ranging as high as 9°C. The latest study that varies model
    parameters allows sensitivities up to 11°C, with the authors
    contending that they can't yet say what the chances of such extremes
    are. Others are pointing to times of extreme warmth in the geologic
    past that climate models fail to replicate, suggesting that there's a
    dangerous element to the climate system that the models do not yet
    contain.

    Climate researchers have their work cut out for them. They must inject
    a better understanding of clouds and aerosols--the biggest sources of
    uncertainty--into their modeling. Ten or 15 years ago, scientists said
    that would take 10 or 15 years; there's no sign of it happening
    anytime soon. They must increase the fidelity of models, a realistic
    goal given the continued acceleration of affordable computing power.
    And they must retrieve more and better records of past climate changes
    and their drivers. Meanwhile, unless a rapid shift away from fossil
    fuel use occurs worldwide, a doubling of carbon dioxide--and
    more--will be inevitable.
    _________________________________________________________________

What Can Replace Cheap Oil--and When?

    Richard A. Kerr and Robert F. Service

    The road from old to new energy sources can be bumpy, but the
    transitions have gone pretty smoothly in the past. After millennia of
    dependence on wood, society added coal and gravity-driven water to the
    energy mix. Industrialization took off. Oil arrived, and
    transportation by land and air soared, with hardly a worry about where
    the next log or lump of coal was coming from, or what the explosive
    growth in energy production might be doing to the world.

    Times have changed. The price of oil has been climbing, and ice is
    melting around both poles as the mercury in the global thermometer
    rises. Whether the next big energy transition will be as smooth as
    past ones will depend in large part on three sets of questions: When
    will world oil production peak? How sensitive is Earth's climate to
    the carbon dioxide we are pouring into the atmosphere by burning
    fossil fuels? And will alternative energy sources be available at
    reasonable costs? The answers rest on science and technology, but how
    society responds will be firmly in the realm of politics.

    There is little disagreement that the world will soon be running short
    of oil. The debate is over how soon. Global demand for oil has been
    rising at 1% or 2% each year, and we are now sucking almost 1000
    barrels of oil from the ground every second. Pessimists--mostly former
    oil company geologists--expect oil production to peak very soon. They
    point to American geologist M. King Hubbert's successful 1956
    prediction of the 1970 peak in U.S. production. Using the same method
    involving records of past production and discoveries, they predict a
    world oil peak by the end of the decade. Optimists--mostly resource
    economists--argue that oil production depends more on economics and
    politics than on how much happens to be in the ground. Technological
    innovation will intervene, and production will continue to rise, they
    say. Even so, midcentury is about as far as anyone is willing to push
    the peak. That's still "soon" considering that the United States, for
    one, will need to begin replacing oil's 40% contribution to its energy
    consumption by then. And as concerns about climate change intensify,
    the transition to nonfossil fuels could become even more urgent (see
    p. [38]100).
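
    Hubbert's method treats cumulative production as tracing a logistic
    curve, so annual production follows its bell-shaped derivative. A
    minimal sketch, in which the recoverable total, peak year, and
    steepness are all illustrative assumptions rather than figures from
    the article:

        # Minimal sketch of the Hubbert logistic model; parameters are
        # illustrative assumptions, not estimates from the article.
        # Cumulative production: Q(t) = q_max / (1 + exp(-k*(t - t_peak)))
        # Annual production is its derivative, a bell curve peaking at
        # t_peak.
        import math

        def hubbert_rate(t, q_max=2.0e12, t_peak=2010.0, k=0.05):
            """Annual production (barrels/year) implied by the logistic."""
            e = math.exp(-k * (t - t_peak))
            return q_max * k * e / (1.0 + e) ** 2

        for year in (1970, 1990, 2010, 2030, 2050):
            print(year, f"{hubbert_rate(year):.2e} barrels/year")

    Fitting the curve's parameters to past production and discovery records
    is what lets peak forecasters extrapolate the date of maximum output.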

    If oil supplies do peak soon or climate concerns prompt a major shift
    away from fossil fuels, plenty of alternative energy supplies are
    waiting in the wings. The sun bathes Earth's surface with 86,000
    trillion watts, or terawatts, of energy at all times, about 6600 times
    the amount used by all humans on the planet each year. Wind, biomass,
    and nuclear power are also plentiful. And there is no shortage of
    opportunities for using energy more efficiently.
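
    Those figures are mutually consistent, as a back-of-the-envelope check
    (ours, not the article's) shows:

        # Back-of-the-envelope check of the ratio quoted in the text.
        solar_input_tw = 86_000   # sunlight on Earth's surface, terawatts
        human_demand_tw = 13      # rough current human energy use, terawatts
        print(solar_input_tw / human_demand_tw)  # ~6600, as quoted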

    Of course, alternative energy sources have their issues. Nuclear
    fission supporters have never found a noncontroversial solution for
    disposing of long-lived radioactive wastes, and concerns over
    liability and capital costs are scaring utility companies off.
    Renewable energy sources are diffuse, making it difficult and
    expensive to corral enough power from them at cheap prices. So far,
    wind is leading the way with a global installed capacity of more than
    40 billion watts, or gigawatts, providing electricity for about 4.5
    cents per kilowatt hour.

    That sounds good, but the scale of renewable energy is still very
    small when compared to fossil fuel use. In the United States,
    renewables account for just 6% of overall energy production. And, with
    global energy demand expected to grow from approximately 13 terawatts
    now to somewhere between 30 and 60 terawatts by the middle of
    this century, use of renewables will have to expand enormously to
    displace current sources and have a significant impact on the world's
    future energy needs.

    What needs to happen for that to take place? Using energy more
    efficiently is likely to be the sine qua non of energy planning--not
    least to buy time for efficiency improvements in alternative energy.
    The cost of solar electric power modules has already dropped two
    orders of magnitude over the last 30 years. And most experts figure
    the price needs to drop 100-fold again before solar energy systems
    will be widely adopted. Advances in nanotechnology may help by
    providing novel semiconductor systems to boost the efficiency of solar
    energy collectors and perhaps produce chemical fuels directly from
    sunlight, CO2, and water.

    But whether these will come in time to avoid an energy crunch depends
    in part on how high a priority we give energy research and
    development. And it will require a global political consensus on what
    the science is telling us.
    _________________________________________________________________

Will Malthus Continue to Be Wrong?

    Erik Stokstad

    In 1798, a 32-year-old curate at a small parish church in Albury,
    England, published a sobering pamphlet entitled An Essay on the
    Principle of Population. As a grim rebuttal of the utopian
    philosophers of his day, Thomas Malthus argued that human populations
    will always tend to grow and, eventually, they will always be
    checked--either by foresight, such as birth control, or as a result of
    famine, war, or disease. Those speculations have inspired many a dire
    warning from environmentalists.

    Since Malthus's time, world population has risen sixfold to more than
    6 billion. Yet happily, apocalyptic collapses have mostly been
    prevented by the advent of cheap energy, the rise of science and
    technology, and the green revolution. Most demographers predict that
    by 2100, global population will level off at about 10 billion.
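
    Leveling off at a fixed ceiling is the signature of logistic growth. A
    toy sketch below uses an assumed ceiling, growth rate, and midpoint;
    real demographic projections rest on far richer cohort models:

        # Toy logistic sketch of population leveling off; all parameters
        # are illustrative assumptions, not demographers' estimates.
        import math

        def population_billions(year, ceiling=10.0, k=0.028, midpoint=1985):
            return ceiling / (1.0 + math.exp(-k * (year - midpoint)))

        for year in (1900, 2000, 2050, 2100):
            print(year, f"{population_billions(year):.1f} billion")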

    The urgent question is whether current standards of living can be
    sustained while improving the plight of those in need. Consumption of
    resources--not just food but also water, fossil fuels, timber, and
    other essentials--has grown enormously in the developed world. In
    addition, humans have compounded the direct threats to those resources
    in many ways, including by changing climate (see p. 100),
    polluting land and water, and spreading invasive species.

    How can humans live sustainably on the planet and do so in a way that
    manages to preserve some biodiversity? Tackling that question involves
    a broad range of research for natural and social scientists. It's
    abundantly clear, for example, that humans are degrading many
    ecosystems and hindering their ability to provide clean water and
    other "goods and services" (Science, 1 April, p. [38]41). But exactly
    how bad is the situation? Researchers need better information on the
    status and trends of wetlands, forests, and other areas. To set
    priorities, they'd also like a better understanding of what makes
    ecosystems more resistant or vulnerable and whether stressed
    ecosystems, such as marine fisheries, have a threshold at which they
    won't recover.

    Figure 1. Out of balance. Sustaining a growing world population is
    threatened by inefficient consumption of resources--and by poverty.

    Agronomists face the task of feeding 4 billion more mouths. Yields may
    be maxing out in the developed world, but much can still be done in
    the developing world, particularly sub-Saharan Africa, which
    desperately needs more nitrogen. Although agricultural biotechnology
    clearly has potential to boost yields and lessen the environmental
    impact of farming, it has its own risks, and winning over skeptics has
    proven difficult.

    There's no shortage of work for social scientists either. Perverse
    subsidies that encourage overuse of resources--tax loopholes for
    luxury Hummers and other inefficient vehicles, for example--remain a
    chronic problem. A new area of activity is the attempt to place values
    on ecosystems' services, so that the price of clear-cut lumber, for
    instance, covers the loss of a forest's ability to provide clean
    water. Incorporating those "externalities" into pricing is a daunting
    challenge that demands much more knowledge of ecosystems. In addition,
    economic decisions often consider only net present value and discount
    the future value of resources--soil erosion, slash-and-burn
    agriculture, and the mining of groundwater for cities and farming are
    prime examples. All this complicates the process of transforming
    industries so that they provide jobs, goods, and services while
    damaging the environment less.

    Researchers must also grapple with the changing demographics of
    housing and how it will impact human well-being: In the next 35 to 50
    years, the number of people living in cities will double. Much of the
    growth will likely happen in the developing world in cities that
    currently have 30,000 to 3 million residents. Coping with that huge
    urban influx will require everything from energy efficient ways to
    make concrete to simple ways to purify drinking water.

    And in an age of global television and relentless advertising, what
    will happen to patterns of consumption? The world clearly can't
    support 10 billion people living like Americans do today. Whether
    science--both the natural and social sciences--and technology can
    crank up efficiency and solve the problems we've created is perhaps
    the most critical question the world faces. Mustering the political
    will to make hard choices is, however, likely to be an even bigger
    challenge.
    _________________________________________________________________

In Praise of Hard Questions

    Tom Siegfried*

    Great cases, as U.S. Supreme Court Justice Oliver Wendell Holmes
    suggested a century ago, may make bad law. But great questions often
    make very good science.

    Unsolved mysteries provide science with motivation and direction. Gaps
    in the road to scientific knowledge are not potholes to be avoided,
    but opportunities to be exploited.

    "Fundamental questions are guideposts; they stimulate people," says
    2004 Nobel physics laureate David Gross. "One of the most creative
    qualities a research scientist can have is the ability to ask the
    right questions."

    Science's greatest advances occur on the frontiers, at the interface
    between ignorance and knowledge, where the most profound questions are
    posed. There's no better way to assess the current condition of
    science than listing the questions that science cannot answer.
    "Science," Gross declares, "is shaped by ignorance."

    There have been times, though, when some believed that science had
    paved over all the gaps, ending the age of ignorance. When Science was
    born, in 1880, James Clerk Maxwell had died just the year before,
    after successfully explaining light, electricity, magnetism, and heat.
    With gravity mastered by Newton 2 centuries earlier, physics seemed,
    to myopic eyes, essentially finished. Darwin, meanwhile,
    had established the guiding principle of biology, and Mendeleyev's
    periodic table--only a decade old--allowed chemistry to publish its
    foundations on a poster board. Maxwell himself mentioned that many
    physicists believed the trend in their field was merely to measure the
    values of physical constants "to another place of decimals."

    Nevertheless, great questions raged. Savants of science debated not
    only the power of natural selection, but also the origin of the solar
    system, the age and internal structure of Earth, and the prospect of a
    plurality of worlds populating the cosmos.

    In fact, at the time of Maxwell's death, his theory of electromagnetic
    fields was not yet widely accepted or even well known; experts still
    argued about whether electricity and magnetism propagated their
    effects via "action at a distance," as gravity (supposedly) did, or by
    Michael Faraday's "lines of force" (incorporated by Maxwell into his
    fields). Lurking behind that dispute was the deeper issue of whether
    gravity could be unified with electromagnetism (Maxwell thought not),
    a question that remains one of the greatest in science today, in a
    somewhat more complicated form.

    Maxwell knew full well that his accomplishments left questions
    unanswered. His calculations regarding the internal motion of
    molecules did not agree with measurements of specific heats, for
    instance. "Something essential to the complete state of the physical
    theory of molecular encounters must have hitherto escaped us," he
    commented.

    When Science turned 20--at the 19th century's end--Maxwell's mentor
    William Thomson (Lord Kelvin) articulated the two grand gaps in
    knowledge of the day. (He called them "clouds" hanging over
    physicists' heads.) One was the mystery of specific heats that Maxwell
    had identified; the other was the failure to detect the ether, a
    medium seemingly required by Maxwell's electromagnetic waves.

    Filling those two gaps in knowledge required the 20th century's
    quantum and relativity revolutions. The ignorance enveloped in
    Kelvin's clouds was the impetus for science's revitalization.

    Throughout the last century, pursuing answers to great questions
    reshaped human understanding of the physical and living world. Debates
    over the plurality of worlds assumed galactic proportions,
    specifically addressing whether Earth's home galaxy, the Milky Way,
    was only one of many such conglomerations of stars. That issue was
    soon resolved in favor of the Milky Way's nonexclusive status, in much
    the same manner that Earth itself had been demoted from its central
    role in the cosmos by Copernicus centuries before.

    But the existence of galaxies outside our own posed another question,
    about the apparent motions of those galaxies away from one another.
    That issue echoed a curious report in Science's first issue about a
    set of stars forming a triangular pattern, with a double star at the
    apex and two others forming the base. Precise observations showed the
    stars to be moving apart, making the triangle bigger but maintaining
    its form.

    "It seems probable that all these stars are slowly moving away from
    one common point, so that many years back they were all very much
    closer to one another," Science reported, as though the four stars had
    all begun their journey from the same place. Understanding such motion
    was a question "of the highest interest."

    Half a century later, Edwin Hubble enlarged that question from one
    about stellar motion to the origin and history of the universe itself.
    He showed that galaxies also appeared to be receding from a common
    starting point, evidence that the universe was expanding. With
    Hubble's discovery, cosmology's grand questions began to morph from
    the philosophical to the empirical. And with the discovery of the
    cosmic microwave background in the 1960s, the big bang theory of the
    universe's birth assumed the starring role on the cosmological
    stage--providing cosmologists with one big answer and many new
    questions.

    By Science's centennial, a quarter-century ago, many gaps still
    remained in knowledge of the cosmos; some of them have since been
    filled, while others linger. At that time debate continued over the
    existence of planets around faraway stars, a question now settled with
    the discovery of dozens of planets in the solar system's galactic
    neighborhood. But now a bigger question looms beyond the scope of
    planets or even galaxies: the prospect of multiple universes, cousins
    to the bubble of time and space that humans occupy.

    And not only may the human universe not be alone (defying the old
    definition of universe), humans may not be alone in their own space,
    either. The possible existence of life elsewhere in the cosmos remains
    as great a gap as any in present-day knowledge. And it is enmeshed
    with the equally deep mystery of life's origin on Earth.

    Life, of course, inspires many deep questions, from the prospects for
    immortality to the prognosis for eliminating disease. Scientists
    continue to wonder whether they will ever be able to create new life
    forms from scratch, or at least simulate life's self-assembling
    capabilities. Biologists, physicists, mathematicians, and computer
    scientists have begun cooperating on a sophisticated "systems biology"
    aimed at understanding how the countless molecular interactions at the
    heart of life fit together in the workings of cells, organs, and whole
    animals. And if successful, the systems approach should help doctors
    tailor treatments to individual variations in DNA, permitting
    personalized medicine that deters disease without inflicting side
    effects. Before Science turns 150, revamped versions of modern
    medicine may make it possible for humans to live that long, too.

    As Science and science age, knowledge and ignorance have coevolved,
    and the nature of the great questions sometimes changes. Old questions
    about the age and structure of the Earth, for instance, have given way
    to issues concerning the planet's capacity to support a growing and
    aging population.

    Some great questions get bigger over time, encompassing an
    ever-expanding universe, or become more profound, such as the quest to
    understand consciousness. On the other hand, many deep questions drive
    science to smaller scales, more minute than the realm of atoms and
    molecules, or to a greater depth of detail underlying broad-brush
    answers to past big questions. In 1880, some scientists remained
    unconvinced by Maxwell's evidence for atoms. Today, the analogous
    debate focuses on superstrings as the ultimate bits of matter, on a
    scale a trillion trillion times smaller. Old arguments over evolution
    and natural selection have descended to debates on the dynamics of
    speciation, or how particular behaviors, such as altruistic
    cooperation, have emerged from the laws of individual competition.

    Great questions themselves evolve, of course, because their answers
    spawn new and better questions in turn. The solutions to Kelvin's
    clouds--relativity and quantum physics--generated many of the
    mysteries on today's list, from the composition of the cosmos to the
    prospect for quantum computers.

    Ultimately, great questions like these both define the state of
    scientific knowledge and drive the engines of scientific discovery.
    Where ignorance and knowledge converge, where the known confronts the
    unknown, is where scientific progress is most dramatically made.
    "Thoroughly conscious ignorance," wrote Maxwell, "is the prelude to
    every real advance in science."

    So when science runs out of questions, it would seem, science will
    come to an end. But there's no real danger of that. The highway from
    ignorance to knowledge runs both ways: As knowledge accumulates,
    diminishing the ignorance of the past, new questions arise, expanding
    the areas of ignorance to explore.

    Maxwell knew that even an era of precision measurements is not a sign
    of science's end but preparation for the opening of new frontiers. In
    every branch of science, Maxwell declared, "the labor of careful
    measurement has been rewarded by the discovery of new fields of
    research and by the development of new scientific ideas."

    If science's progress seems to slow, it's because its questions get
    increasingly difficult, not because there will be no new questions
    left to answer.

    Fortunately, hard questions also can make great science, just as
    Justice Holmes noted that hard cases, like great cases, made bad law.
    Bad law resulted, he said, because emotional concerns about celebrated
    cases exerted pressures that distorted well-established legal
    principles. And that's why the situation in science is the opposite of
    that in law. The pressures of the great, hard questions bend and even
    break well-established principles, which is what makes science forever
    self-renewing--and which is what demolishes the nonsensical notion
    that science's job will ever be done.
                  __________________________________________

    Tom Siegfried is the author of Strange Matters and The Bit and the
    Pendulum.

    _________________________________________________________________

What Is the Universe Made Of?

    Charles Seife

    Every once in a while, cosmologists are dragged, kicking and
    screaming, into a universe much more unsettling than they had any
    reason to expect. In the 1500s and 1600s, Copernicus, Kepler, and
    Newton showed that Earth is just one of many planets orbiting one of
    many stars, destroying the comfortable Medieval notion of a closed and
    tiny cosmos. In the 1920s, Edwin Hubble showed that our universe is
    constantly expanding and evolving, a finding that eventually shattered
    the idea that the universe is unchanging and eternal. And in the past
    few decades, cosmologists have discovered that the ordinary matter
    that makes up stars and galaxies and people is less than 5% of
    everything there is. Grappling with this new understanding of the
    cosmos, scientists face one overriding question: What is the universe
    made of?

    This question arises from years of progressively stranger
    observations. In the 1960s, astronomers discovered that galaxies spun
    around too fast for the collective pull of the stars' gravity to keep
    them from flying apart. Something unseen appears to be keeping the
    stars from flinging themselves away from the center: unilluminated
    matter that exerts extra gravitational force. This is dark matter.
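
    The mismatch can be stated quantitatively: a star in a circular orbit
    at radius r around an enclosed mass M moves at v = sqrt(G*M/r), so
    beyond the visible disk the speed should fall off as 1/sqrt(r);
    observed rotation curves stay roughly flat instead. A minimal sketch
    with an assumed visible mass:

        # Minimal sketch of the rotation-curve argument; the visible mass
        # is an assumed, order-of-magnitude value for a large galaxy.
        import math

        G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
        M_VISIBLE = 2.0e41  # kg, assumed visible mass inside ~5 kpc
        KPC = 3.086e19      # meters per kiloparsec

        for r_kpc in (5, 10, 20, 40):
            v = math.sqrt(G * M_VISIBLE / (r_kpc * KPC))
            print(f"r = {r_kpc:2d} kpc: Keplerian v = {v / 1000:.0f} km/s")

        # Predicted speeds fall from ~290 to ~100 km/s; measured curves
        # stay near ~200 km/s out to large radii, implying unseen mass.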

    Over the years, scientists have spotted some of this dark matter in
    space; they have seen ghostly clouds of gas with x-ray telescopes,
    watched the twinkle of distant stars as invisible clumps of matter
    pass in front of them, and measured the distortion of space and time
    caused by invisible mass in galaxies. And thanks to observations of
    the abundances of elements in primordial gas clouds, physicists have
    concluded that only 10% of ordinary matter is visible to telescopes.

    Figure 1. In the dark. Dark matter holds galaxies together;
    supernovae measurements point to a mysterious dark energy.

    But even multiplying all the visible "ordinary" matter by 10 doesn't
    come close to accounting for how the universe is structured. When
    astronomers look up in the heavens with powerful telescopes, they see
    a lumpy cosmos. Galaxies don't dot the skies uniformly; they cluster
    together in thin tendrils and filaments that twine among vast voids.
    Just as there isn't enough visible matter to keep galaxies spinning at
    the right speed, there isn't enough ordinary matter to account for
    this lumpiness. Cosmologists now conclude that the gravitational
    forces exerted by another form of dark matter, made of an
    as-yet-undiscovered type of particle, must be sculpting these vast
    cosmic structures. They estimate that this exotic dark matter makes up
    about 25% of the stuff in the universe--five times as much as ordinary
    matter.

    But even this mysterious entity pales by comparison to another
    mystery: dark energy. In the late 1990s, scientists examining distant
    supernovae discovered that the universe is expanding faster and
    faster, instead of slowing down as the laws of physics would imply. Is
    there some sort of antigravity force blowing the universe up?

    All signs point to yes. Independent measurements of a variety of
    phenomena--cosmic background radiation, element abundances, galaxy
    clustering, gravitational lensing, gas cloud properties--all converge
    on a consistent, but bizarre, picture of the cosmos. Ordinary matter
    and exotic, unknown particles together make up only about 30% of the
    stuff in the universe; the rest is this mysterious anti-gravity force
    known as dark energy.

    This means that figuring out what the universe is made of will require
    answers to three increasingly difficult sets of questions. What is
    ordinary dark matter made of, and where does it reside? Astrophysical
    observations, such as those that measure the bending of light by
    massive objects in space, are already yielding the answer. What is
    exotic dark matter? Scientists have some ideas, and with luck, a
    dark-matter trap buried deep underground or a high-energy atom smasher
    will discover a new type of particle within the next decade. And
    finally, what is dark energy? This question, which wouldn't even have
    been asked a decade ago, seems to transcend known physics more than
    any other phenomenon yet observed. Ever-better measurements of
    supernovae and cosmic background radiation as well as planned
    observations of gravitational lensing will yield information about
    dark energy's "equation of state"--essentially a measure of how
    squishy the substance is. But at the moment, the nature of dark energy
    is arguably the murkiest question in physics--and the one that, when
    answered, may shed the most light.

    _________________________________________________________________

So Much More to Know ...

    From the nature of the cosmos to the nature of societies, the
    following 100 questions span the sciences. Some are pieces of
    questions discussed above; others are big questions in their own
    right. Some will drive scientific inquiry for the next century; others
    may soon be answered. Many will undoubtedly spawn new questions.

    Is ours the only universe?
    A number of quantum theorists and cosmologists are trying to figure
    out whether our universe is part of a bigger "multiverse." But others
    suspect that this hard-to-test idea may be a question for
    philosophers.

    What drove cosmic inflation?
    In the first moments after the big bang, the universe blew up at an
    incredible rate. But what did the blowing? Measurements of the cosmic
    microwave background and other astrophysical observations are
    narrowing the possibilities.

    When and how did the first stars and galaxies form?
    The broad brush strokes are visible, but the fine details aren't. Data
    from satellites and ground-based telescopes may soon help pinpoint,
    among other particulars, when the first generation of stars burned off
    the hydrogen "fog" that filled the universe.

    Where do ultrahigh-energy cosmic rays come from?
    Above a certain energy, cosmic rays don't travel very far before being
    destroyed. So why are cosmic-ray hunters spotting such rays with no
    obvious source within our galaxy?

    What powers quasars?
    The mightiest energy fountains in the universe probably get their
    power from matter plunging into whirling supermassive black holes. But
    the details of what drives their jets remain anybody's guess.

    What is the nature of black holes?
    Relativistic mass crammed into a quantum-sized object? It's a recipe
    for disaster--and scientists are still trying to figure out the
    ingredients.

    Why is there more matter than antimatter?
    To a particle physicist, matter and antimatter are almost the same.
    Some subtle difference must explain why matter is common and
    antimatter rare.

    Does the proton decay?
    In a theory of everything, quarks (which make up protons) should
    somehow be convertible to leptons (such as electrons)--so catching a
    proton decaying into something else might reveal new laws of particle
    physics.

    What is the nature of gravity?
    It clashes with quantum theory. It doesn't fit in the Standard Model.
    Nobody has spotted the particle that is responsible for it. Newton's
    apple contained a whole can of worms.

    Why is time different from other dimensions?
    It took millennia for scientists to realize that time is a dimension,
    like the three spatial dimensions, and that time and space are
    inextricably linked. The equations make sense, but they don't satisfy
    those who ask why we perceive a "now" or why time seems to flow the
    way it does.

    Are there smaller building blocks than quarks?
    Atoms were "uncuttable." Then scientists discovered protons, neutrons,
    and other subatomic particles--which were, in turn, shown to be made
    up of quarks and gluons. Is there something more fundamental still?

    Are neutrinos their own antiparticles?
    Nobody knows this basic fact about neutrinos, although a number of
    underground experiments are under way. Answering this question may be
    a crucial step to understanding the origin of matter in the universe.

    Is there a unified theory explaining all correlated electron systems?
    High-temperature superconductors and materials with giant and colossal
    magnetoresistance are all governed by the collective rather than
    individual behavior of electrons. There is currently no common
    framework for understanding them.

    What is the most powerful laser researchers can build?
    Theorists say an intense enough laser field would rip photons into
    electron-positron pairs, dousing the beam. But no one knows whether
    it's possible to reach that point.

    Can researchers make a perfect optical lens?
    They've done it with microwaves but never with visible light.

    Is it possible to create magnetic semiconductors that work at room
    temperature?
    Such devices have been demonstrated at low temperatures but not yet in
    a range warm enough for spintronics applications.

    What is the pairing mechanism behind high-temperature
    superconductivity?
    Electrons in superconductors surf together in pairs. After 2 decades
    of intense study, no one knows what holds them together in the
    complex, high-temperature materials.

    Can we develop a general theory of the dynamics of turbulent flows and
    the motion of granular materials?
    So far, such "nonequilibrium systems" defy the tool kit of statistical
    mechanics, and the failure leaves a gaping hole in physics.

    Are there stable high-atomic-number elements?
    A superheavy element with 184 neutrons and 114 protons should be
    relatively stable, if physicists can create it.

    Is superfluidity possible in a solid? If so, how?
    Despite hints in solid helium, nobody is sure whether a crystalline
    material can flow without resistance. If new types of experiments show
    that such outlandish behavior is possible, theorists would have to
    explain how.

    What is the structure of water?
    Researchers continue to tussle over how many bonds each H2O molecule
    makes with its nearest neighbors.

    What is the nature of the glassy state?
    Molecules in a glass are arranged much like those in liquids but are
    more tightly packed. Where and why does liquid end and glass begin?

    Are there limits to rational chemical synthesis?
    The larger synthetic molecules get, the harder it is to control their
    shapes and make enough copies of them to be useful. Chemists will need
    new tools to keep their creations growing.

    What is the ultimate efficiency of photovoltaic cells?
    Conventional solar cells top out at converting 32% of the energy in
    sunlight to electricity. Can researchers break through the barrier?

    Will fusion always be the energy source of the future?
    It's been 35 years away for about 50 years, and unless the
    international community gets its act together, it'll be 35 years away
    for many decades to come.

    What drives the solar magnetic cycle?
    Scientists believe differing rates of rotation from place to place on
    the sun underlie its 22-year sunspot cycle. They just can't make it
    work in their simulations. Either a detail is askew, or it's back to
    the drawing board.

    How do planets form?
    How bits of dust and ice and gobs of gas came together to form the
    planets without the sun devouring them all is still unclear. Planetary
    systems around other stars should provide clues.

    What causes ice ages?
    Something about the way the planet tilts, wobbles, and careens around
    the sun presumably brings on ice ages every 100,000 years or so, but
    reams of climate records haven't explained exactly how.

    What causes reversals in Earth's magnetic field?
    Computer models and laboratory experiments are generating new data on
    how Earth's magnetic poles might flip-flop. The trick will be matching
    simulations to enough aspects of the magnetic field beyond the
    inaccessible core to build a convincing case.

    Are there earthquake precursors that can lead to useful predictions?
    Prospects for finding signs of an imminent quake have been waning
    since the 1970s. Understanding faults will progress, but routine
    prediction would require an as-yet-unimagined breakthrough.

    Is there--or was there--life elsewhere in the solar system?
    The search for life--past or present--on other planetary bodies now
    drives NASA's planetary exploration program, which focuses on Mars,
    where water abounded when life might have first arisen.

    What is the origin of homochirality in nature?
    Most biomolecules can be synthesized in mirror-image shapes. Yet in
    organisms, amino acids are always left-handed, and sugars are always
    right-handed. The origins of this preference remain a mystery.

    Can we predict how proteins will fold?
    Out of a near infinitude of possible ways to fold, a protein picks one
    in just tens of microseconds. The same task takes 30 years of computer
    time.
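
    The "near infinitude" is Levinthal's classic counting argument; a toy
    version of the arithmetic, using the traditional illustrative numbers
    rather than measured ones:

        # Toy Levinthal arithmetic (traditional illustrative numbers):
        # ~3 backbone conformations per residue, a 100-residue protein,
        # and a blind search sampling 10^13 conformations per second.
        conformations = 3 ** 100          # ~5e47 possible chain shapes
        rate_per_s = 1e13                 # assumed sampling rate
        seconds_per_year = 3.15e7
        print(conformations / rate_per_s / seconds_per_year)  # ~1.6e27 years

    Real proteins clearly do not search blindly, which is why folding in
    microseconds is the puzzle.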

    How many proteins are there in humans?
    It has been hard enough counting genes. Proteins can be spliced in
    different ways and decorated with numerous functional groups, all of
    which makes counting their numbers impossible for now.

    How do proteins find their partners?
    Protein-protein interactions are at the heart of life. To understand
    how partners come together in precise orientations in seconds,
    researchers need to know more about the cell's biochemistry and
    structural organization.

    How many forms of cell death are there?
    In the 1970s, apoptosis was finally recognized as distinct from
    necrosis. Some biologists now argue that the cell death story is even
    more complicated. Identifying new ways cells die could lead to better
    treatments for cancer and degenerative diseases.

    What keeps intracellular traffic running smoothly?
    Membranes inside cells transport key nutrients around, and through,
    various cell compartments without sticking to each other or losing
    their way. Insights into how membranes stay on track could help
    conquer diseases, such as cystic fibrosis.

    What enables cellular components to copy themselves independent of
    DNA?
    Centrosomes, which help pull apart paired chromosomes, and other
    organelles replicate on their own time, without DNA's guidance. This
    independence still defies explanation.

    What roles do different forms of RNA play in genome function?
    RNA is turning out to play a dizzying assortment of roles, from
    potentially passing genetic information to offspring to muting gene
    expression. Scientists are scrambling to decipher this versatile
    molecule.

    What role do telomeres and centromeres play in genome function?
    These chromosome features will remain mysteries until new technologies
    can sequence them.

    Why are some genomes really big and others quite compact?
    The puffer fish genome is 400 million bases; one lungfish's is 133
    billion bases long. Repetitive and duplicated DNA don't explain why
    this and other size differences exist.

    What is all that "junk" doing in our genomes?
    DNA between genes is proving important for genome function and the
    evolution of new species. Comparative sequencing, microarray studies,
    and lab work are helping genomicists find a multitude of genetic gems
    amid the junk.

    How much will new technologies lower the cost of sequencing?
    New tools and conceptual breakthroughs are driving the cost of DNA
    sequencing down by orders of magnitude. The reductions are enabling
    research from personalized medicine to evolutionary biology to thrive.

    How do organs and whole organisms know when to stop growing?
    A person's right and left legs almost always end up the same length,
    and the hearts of mice and elephants each fit the proper rib cage. How
    genes set limits on cell size and number continues to mystify.

    How can genome changes other than mutations be inherited?
    Researchers are finding ever more examples of this process, called
    epigenetics, but they can't explain what causes and preserves the
    changes.

    How is asymmetry determined in the embryo?
    Whirling cilia help an embryo tell its left from its right, but
    scientists are still looking for the first factors that give a
    relatively uniform ball of cells a head, tail, front, and back.

    How do limbs, fins, and faces develop and evolve?
    The genes that determine the length of a nose or the breadth of a wing
    are subject to natural and sexual selection. Understanding how
    selection works could lead to new ideas about the mechanics of
    evolution with respect to development.

    What triggers puberty?
    Nutrition--including that received in utero--seems to help set this
    mysterious biological clock, but no one knows exactly what forces
    childhood to end.

    Are stem cells at the heart of all cancers?
    The most aggressive cancer cells look a lot like stem cells. If
    cancers are caused by stem cells gone awry, studies of a cell's
    "stemness" may lead to tools that could catch tumors sooner and
    destroy them more effectively.

    Is cancer susceptible to immune control?
    Although our immune responses can suppress tumor growth, tumor cells
    can combat those responses with counter-measures. This defense can
    stymie researchers hoping to develop immune therapies against cancer.

    Can cancers be controlled rather than cured?
    Drugs that cut off a tumor's fuel supplies--say, by stopping
    blood-vessel growth--can safely check or even reverse tumor growth.
    But how long the drugs remain effective is still unknown.

    Is inflammation a major factor in all chronic diseases?
    It's a driver of arthritis, but cancer and heart disease? More and
    more, the answer seems to be yes, and the question remains why and
    how.

    How do prion diseases work?
    Even if one accepts that prions are just misfolded proteins, many
    mysteries remain. How can they go from the gut to the brain, and how
    do they kill cells once there, for example?

    How much do vertebrates depend on the innate immune system to fight
    infection?
    This system predates the vertebrate adaptive immune response. Its
    relative importance is unclear, but immunologists are working to find
    out.

    Does immunologic memory require chronic exposure to antigens?
    Yes, say a few prominent thinkers, but experiments with mice now
    challenge the theory. Putting the debate to rest would require proving
    that something is not there, so the question likely will not go away.

    Why doesn't a pregnant woman reject her fetus?
    Recent evidence suggests that the mother's immune system doesn't
    "realize" that the fetus is foreign even though it gets half its genes
    from the father. Yet just as Nobelist Peter Medawar said when he first
    raised this question in 1952, "the verdict has yet to be returned."

    What synchronizes an organism's circadian clocks?
    Circadian clock genes have popped up in all types of creatures and in
    many parts of the body. Now the challenge is figuring out how all the
    gears fit together and what keeps the clocks set to the same time.

    How do migrating organisms find their way?
    Birds, butterflies, and whales make annual journeys of thousands of
    kilometers. They rely on cues such as stars and magnetic fields, but
    the details remain unclear.

    Why do we sleep?
    A sound slumber may refresh muscles and organs or keep animals safe
    from dangers lurking in the dark. But the real secret of sleep
    probably resides in the brain, which is anything but still while we're
    snoring away.

    Why do we dream?
    Freud thought dreaming provides an outlet for our unconscious desires.
    Now, neuroscientists suspect that brain activity during REM
    sleep--when dreams occur--is crucial for learning. Is the experience
    of dreaming just a side effect?

    Why are there critical periods for language learning?
    Monitoring brain activity in young children--including infants--may
    shed light on why children pick up languages with ease while adults
    often struggle to learn train station basics in a foreign tongue.

    Do pheromones influence human behavior?
    Many animals use airborne chemicals to communicate, particularly when
    mating. Controversial studies have hinted that humans too use
    pheromones. Identifying them will be key to assessing their sway on
    our social lives.

    How do general anesthetics work?
    Scientists are chipping away at the drugs' effects on individual
    neurons, but understanding how they render us unconscious will be a
    tougher nut to crack.

    What causes schizophrenia?
    Researchers are trying to track down genes involved in this disorder.
    Clues may also come from research on traits schizophrenics share with
    normal people.

    What causes autism?
    Many genes probably contribute to this baffling disorder, as well as
    unknown environmental factors. A biomarker for early diagnosis would
    help improve existing therapy, but a cure is a distant hope.

    To what extent can we stave off Alzheimer's?
    A 5- to 10-year delay in this late-onset disease would improve old age
    for millions. Researchers are determining whether treatments with
    hormones or antioxidants, or mental and physical exercise, will help.

    What is the biological basis of addiction?
    Addiction involves the disruption of the brain's reward circuitry. But
    personality traits such as impulsivity and sensation-seeking also play
    a part in this complex behavior.

    Is morality hardwired into the brain?
    That question has long puzzled philosophers; now some neuroscientists
    think brain imaging will reveal circuits involved in reasoning.

    What are the limits of learning by machines?
    Computers can already beat the world's best chess players, and they
    have a wealth of information on the Web to draw on. But abstract
    reasoning is still beyond any machine.

    How much of personality is genetic?
    Aspects of personality are influenced by genes; environment modifies
    the genetic effects. The relative contributions remain under debate.

    What is the biological root of sexual orientation?
    Much of the "environmental" contribution to homosexuality may occur
    before birth in the form of prenatal hormones, so answering this
    question will require more than just the hunt for "gay genes."

    Will there ever be a tree of life that systematists can agree on?
    Despite better morphological, molecular, and statistical methods,
    researchers' trees don't agree. Expect greater, but not complete,
    consensus.

    How many species are there on Earth?
    Count all the stars in the sky? Impossible. Count all the species on
    Earth? Ditto. But the biodiversity crisis demands that we try.

    What is a species?
    A "simple" concept that's been muddied by evolutionary data; a clear
    definition may be a long time in coming.

    Why does lateral transfer occur in so many species and how?
    Once considered rare, gene swapping, particularly among microbes, is
    proving quite common. But why and how genes are so mobile--and the
    effect on fitness--remains to be determined.

    Who was LUCA (the last universal common ancestor)?
    Ideas about the origin of the 1.5-billion-year-old "mother" of all
    complex organisms abound. The continued discovery of primitive
    microbes, along with comparative genomics, should help resolve life's
    deep past.

    How did flowers evolve?
    Darwin called this question an "abominable mystery." Flowers arose in
    the cycads and conifers, but the details of their evolution remain
    obscure.

    How do plants make cell walls?
    Cellulose and pectin walls surround cells, keeping water in and
    supporting tall trees. The biochemistry holds the secrets to turning
    plant biomass into fuel.

    How is plant growth controlled?
    Redwoods grow to be hundreds of meters tall, Arctic willows barely 10
    centimeters. Understanding the difference could lead to
    higher-yielding crops.

    Why aren't all plants immune to all diseases?
    Plants can mount a general immune response, but they also maintain
    molecular snipers that take out specific pathogens. Plant pathologists
    are asking why different species, even closely related ones, have
    different sets of defenders. The answer could result in hardier crops.

    What is the basis of variation in stress tolerance in plants?
    We need crops that better withstand drought, cold, and other stresses.
    But there are so many genes involved, in complex interactions, that no
    one has yet figured out which ones work how.

    What caused mass extinctions?
    A huge impact did in the dinosaurs, but the search for other
    catastrophic triggers of extinction has had no luck so far. If more
    subtle or stealthy culprits are to blame, they will take considerably
    longer to find.

    Can we prevent extinction?
    Finding cost-effective and politically feasible ways to save many
    endangered species requires creative thinking.

    Why were some dinosaurs so large?
    Dinosaurs reached almost unimaginable sizes, some in less than 20
    years. But how did the long-necked sauropods, for instance, eat enough
    to pack on up to 100 tons without denuding their world?

    How will ecosystems respond to global warming?
    To anticipate the effects of the intensifying greenhouse, climate
    modelers will have to focus on regional changes and ecologists on the
    right combination of environmental changes.

    How many kinds of humans coexisted in the recent past, and how did
    they relate?
    The new dwarf human species fossil from Indonesia suggests that at
    least four kinds of humans thrived in the past 100,000 years. Better
    dates and additional material will help confirm or revise this
    picture.

    What gave rise to modern human behavior?
    Did Homo sapiens acquire abstract thought, language, and art gradually
    or in a cultural "big bang," which in Europe occurred about 40,000
    years ago? Data from Africa, where our species arose, may hold the key
    to the answer.

    What are the roots of human culture?
    No animal comes close to having humans' ability to build on previous
    discoveries and pass the improvements on. What determines those
    differences could help us understand how human culture evolved.

    What are the evolutionary roots of language and music?
    Neuroscientists exploring how we speak and make music are just
    beginning to find clues as to how these prized abilities arose.

    What are human races, and how did they develop?
    Anthropologists have long argued that race lacks biological reality.
    But our genetic makeup does vary with geographic origin, and that
    variation raises political and ethical as well as scientific
    questions.

    Why do some countries grow and others stagnate?
    From Norway to Nigeria, living standards across countries vary
    enormously, and they're not becoming more equal.

    What impact do large government deficits have on a country's interest
    rates and economic growth rate?
    The United States could provide a test case.

    Are political and economic freedom closely tied?
    China may provide one answer.

    Why has poverty increased and life expectancy declined in sub-Saharan
    Africa?
    Almost all efforts to reduce poverty in sub-Saharan Africa have
    failed. Figuring out what will work is crucial to alleviating massive
    human suffering.

    The following six mathematics questions are drawn from a list of seven
    outstanding problems selected by the Clay Mathematics Institute. (The
    seventh problem is discussed on p. 96.) For more details, go to
    www.claymath.org/millennium.

    Is there a simple test for determining whether an elliptic curve has
    an infinite number of rational solutions?
    Equations of the form y^2 = x^3 + ax + b are
    powerful mathematical tools. The Birch and Swinnerton-Dyer conjecture
    tells how to determine how many solutions they have in the realm of
    rational numbers--information that could solve a host of problems, if
    the conjecture is true.
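
    To make the flavor of the conjecture concrete, here is a minimal
    sketch in Python (my own illustration; the curve y^2 = x^3 - x is an
    arbitrary example, not one from the article). The conjecture ties a
    curve's rational solutions to the statistics of point counts like
    these, taken over many primes:

        # Count solutions of y^2 = x^3 + ax + b modulo a prime p.
        def count_points_mod_p(a, b, p):
            count = 0
            for x in range(p):
                for y in range(p):
                    if (y * y - (x**3 + a * x + b)) % p == 0:
                        count += 1
            return count

        for p in [5, 7, 11, 13]:
            print(p, count_points_mod_p(-1, 0, p))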

    Can a Hodge cycle be written as a sum of algebraic cycles?
    Two useful mathematical structures arose independently in geometry and
    in abstract algebra. The Hodge conjecture posits a surprising link
    between them, but the bridge remains to be built.

    Will mathematicians unleash the power of the Navier-Stokes equations?
    First written down in the 1840s, the equations hold the keys to
    understanding both smooth and turbulent flow. To harness them, though,
    theorists must find out exactly when they work and under what
    conditions they break down.
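
    For reference, the incompressible form of the equations at the heart
    of the Clay problem can be written (standard notation, not spelled
    out in the article) as:

        \frac{\partial \mathbf{u}}{\partial t}
          + (\mathbf{u} \cdot \nabla)\mathbf{u}
          = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u},
        \qquad \nabla \cdot \mathbf{u} = 0,

    where u is the flow velocity, p the pressure, rho the density, and
    nu the viscosity; the open question is whether smooth solutions
    always exist in three dimensions.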

    Does Poincaré's test identify spheres in four-dimensional space?
    You can tie a string around a doughnut, but it will slide right off a
    sphere. The mathematical principle behind that observation can
    reliably spot every spherelike object in 3D space. Henri Poincaré
    conjectured that it should also work in the next dimension up, but no
    one has proved it yet.
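
    In precise terms (the standard topological statement, not given in
    the article), the conjecture reads:

        \text{Every simply connected, closed 3-manifold is
        homeomorphic to the 3-sphere } S^3.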

    Do mathematically interesting zero-value solutions of the Riemann zeta
    function all have the form 1/2 + bi?
    Don't sweat the details. Since the mid-19th century, the "Riemann
    hypothesis" has been the monster catfish in mathematicians' pond. If
    true, it will give them a wealth of information about the distribution
    of prime numbers and other long-standing mysteries.
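
    A quick numerical illustration (my own, using the mpmath Python
    library as an assumed tool): the first few nontrivial zeros can be
    computed directly, and each sits on the "critical line" with real
    part exactly 1/2, just as the hypothesis predicts.

        from mpmath import zetazero

        for n in range(1, 4):
            print(zetazero(n))
        # (0.5 + 14.1347251417347j) and so on: real part 1/2 each time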

    Does the Standard Model of particle physics rest on solid mathematical
    foundations?
    For almost 50 years, the model has rested on "quantum Yang-Mills
    theory," which links the behavior of particles to structures found in
    geometry. The theory is breathtakingly elegant and useful--but no one
    has proved that it's sound.
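
    The Clay Institute's formal version of this question (a standard
    statement, paraphrased here) is:

        \text{For every compact simple gauge group } G, \text{ a
        quantum Yang-Mills theory exists on } \mathbb{R}^4 \text{ and
        has a mass gap } \Delta > 0.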
    _________________________________________________________________

What Is the Biological Basis of Consciousness?

    Greg Miller

    For centuries, debating the nature of consciousness was the exclusive
    purview of philosophers. But if the recent torrent of books on the
    topic is any indication, a shift has taken place: Scientists are
    getting into the game.

    Has the nature of consciousness finally shifted from a philosophical
    question to a scientific one that can be solved by doing experiments?
    The answer, as with anything related to this topic, depends on whom you
    ask. But scientific interest in this slippery, age-old question seems
    to be gathering momentum. So far, however, although theories abound,
    hard data are sparse.

    The discourse on consciousness has been hugely influenced by René
    Descartes, the French philosopher who in the mid-17th century declared
    that body and mind are made of different stuff entirely. It must be
    so, Descartes concluded, because the body exists in both time and
    space, whereas the mind has no spatial dimension.

    Recent scientifically oriented accounts of consciousness generally
    reject Descartes's solution; most prefer to treat body and mind as
    different aspects of the same thing. In this view, consciousness
    emerges from the properties and organization of neurons in the brain.
    But how? And how can scientists, with their devotion to objective
    observation and measurement, gain access to the inherently private and
    subjective realm of consciousness?

    Some insights have come from examining neurological patients whose
    injuries have altered their consciousness. Damage to certain
    evolutionarily ancient structures in the brainstem robs people of
    consciousness entirely, leaving them in a coma or a persistent
    vegetative state. Although these regions may be a master switch for
    consciousness, they are unlikely to be its sole source. Different
    aspects of consciousness are probably generated in different brain
    regions. Damage to visual areas of the cerebral cortex, for example,
    can produce strange deficits limited to visual awareness. One
    extensively studied patient, known as D.F., is unable to identify
    shapes or determine the orientation of a thin slot in a vertical disk.
    Yet when asked to pick up a card and slide it through the slot, she
    does so easily. At some level, D.F. must know the orientation of the
    slot to be able to do this, but she seems not to know she knows.

    Cleverly designed experiments can produce similar dissociations of
    unconscious and conscious knowledge in people without neurological
    damage. And researchers hope that scanning the brains of subjects
    engaged in such tasks will reveal clues about the neural activity
    required for conscious awareness. Work with monkeys also may elucidate
    some aspects of consciousness, particularly visual awareness. One
    experimental approach is to present a monkey with an optical illusion
    that creates a "bistable percept," looking like one thing one moment
    and another the next. (The orientation-flipping Necker cube is a
    well-known example.) Monkeys can be trained to indicate which version
    they perceive. At the same time, researchers hunt for neurons that
    track the monkey's perception, in hopes that these neurons will lead
    them to the neural systems involved in conscious visual awareness and
    ultimately to an explanation of how a particular pattern of photons
    hitting the retina produces the experience of seeing, say, a rose.

    Experiments under way at present generally address only pieces of the
    consciousness puzzle, and very few directly address the most enigmatic
    aspect of the conscious human mind: the sense of self. Yet the
    experimental work has begun, and if the results don't provide a
    blinding insight into how consciousness arises from tangles of
    neurons, they should at least refine the next round of questions.

    Ultimately, scientists would like to understand not just the
    biological basis of consciousness but also why it exists. What
    selection pressure led to its development, and how many of our fellow
    creatures share it? Some researchers suspect that consciousness is not
    unique to humans, but of course much depends on how the term is
    defined. Biological markers for consciousness might help settle the
    matter and shed light on how consciousness develops early in life.
    Such markers could also inform medical decisions about loved ones who
    are in an unresponsive state.

    Until fairly recently, tackling the subject of consciousness was a
    dubious career move for any scientist without tenure (and perhaps a
    Nobel Prize already in the bag). Fortunately, more young researchers
    are now joining the fray. The unanswered questions should keep
    them--and the printing presses--busy for many years to come.
    _________________________________________________________________

Why Do Humans Have So Few Genes?

    Elizabeth Pennisi

    When leading biologists were unraveling the sequence of the human
    genome in the late 1990s, they ran a pool on the number of genes
    contained in the 3 billion base pairs that make up our DNA. Few bets
    came close. The conventional wisdom a decade or so ago was that we
    need about 100,000 genes to carry out the myriad cellular processes
    that keep us functioning. But it turns out that we have only about
    25,000 genes--about the same number as a tiny flowering plant called
    Arabidopsis and barely more than the worm Caenorhabditis elegans.

    That big surprise reinforced a growing realization among geneticists:
    Our genomes and those of other mammals are far more flexible and
    complicated than they once seemed. The old notion of one gene/one
    protein has gone by the board: It is now clear that many genes can
    make more than one protein. Regulatory proteins, RNA, noncoding bits
    of DNA, even chemical and structural alterations of the genome itself
    control how, where, and when genes are expressed. Figuring out how all
    these elements work together to choreograph gene expression is one of
    the central challenges facing biologists.

    In the past few years, it has become clear that a phenomenon called
    alternative splicing is one reason human genomes can produce such
    complexity with so few genes. Human genes contain both coding
    DNA--exons--and noncoding DNA. In some genes, different combinations
    of exons can become active at different times, and each combination
    yields a different protein. Alternative splicing was long considered a
    rare hiccup during transcription, but researchers have concluded that
    it may occur in half--some say close to all--of our genes. That
    finding goes a long way toward explaining how so few genes can produce
    hundreds of thousands of different proteins. But how the transcription
    machinery decides which parts of a gene to read at any particular time
    is still largely a mystery.
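
    A toy calculation (mine, not the article's; real splicing is far
    more constrained) shows why the combinatorics are so powerful: if a
    transcript may keep or skip exons independently, even a short gene
    yields many distinct isoforms.

        from itertools import combinations

        exons = ["E1", "E2", "E3", "E4", "E5"]
        isoforms = [c for r in range(1, len(exons) + 1)
                    for c in combinations(exons, r)]
        print(len(isoforms))  # 31 nonempty combinations from 5 exons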

    The same could be said for the mechanisms that determine which genes
    or suites of genes are turned on or off at particular times and
    places. Researchers are discovering that each gene needs a supporting
    cast of hundreds to get its job done. They include proteins that shut
    down or activate a gene, for example by adding acetyl or methyl groups
    to the DNA. Other proteins, called transcription factors, interact
    with the genes more directly: They bind to landing sites situated near
    the gene under their control. As with alternative splicing, activation
    of different combinations of landing sites makes possible exquisite
    control of gene expression, but researchers have yet to figure out
    exactly how all these regulatory elements really work or how they fit
    in with alternative splicing.

    Figure 1 Approximate number of genes

    In the past decade or so, researchers have also come to appreciate the
    key roles played by chromatin proteins and RNA in regulating gene
    expression. Chromatin proteins are essentially the packaging for DNA,
    holding chromosomes in well-defined spirals. By slightly changing
    shape, chromatin may expose different genes to the transcription
    machinery.

    Genes also dance to the tune of RNA. Small RNA molecules, many fewer
    than 30 bases long, now share the limelight with other gene
    regulators.
    Many researchers who once focused on messenger RNA and other
    relatively large RNA molecules have in the past 5 years turned their
    attention to these smaller cousins, including microRNA and small
    nuclear RNA. Surprisingly, RNAs in these various guises shut down and
    otherwise alter gene expression. They also are key to cell
    differentiation in developing organisms, but the mechanisms are not
    fully understood.

    Researchers have made enormous strides in pinpointing these various
    mechanisms. By matching up genomes from organisms on different
    branches on the evolutionary tree, genomicists are locating regulatory
    regions and gaining insights into how mechanisms such as alternative
    splicing evolved. These studies, in turn, should shed light on how
    these regions work. Experiments in mice, such as the addition or
    deletion of regulatory regions and manipulating RNA, and computer
    models should also help. But the central question is likely to remain
    unsolved for a long time: How do all these features meld together to
    make us whole?
    _________________________________________________________________

To What Extent Are Genetic Variation and Personal Health Linked?

    Jennifer Couzin

    Forty years ago, doctors learned why some patients who received the
    anesthetic succinylcholine awoke normally but remained temporarily
    paralyzed and unable to breathe: They shared an inherited quirk that
    slowed their metabolism of the drug. Later, scientists traced sluggish
    succinylcholine metabolism to a particular gene variant. Roughly 1 in
    3500 people carry two deleterious copies, putting them at high risk of
    this distressing side effect.
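
    A back-of-envelope calculation (my own, assuming Hardy-Weinberg
    equilibrium) shows what that 1-in-3500 figure implies: the variant
    allele itself is not rare, and single-copy carriers number roughly
    1 in 30.

        import math

        q = math.sqrt(1 / 3500)      # variant allele frequency, ~0.017
        carriers = 2 * q * (1 - q)   # heterozygote frequency, 2pq
        print(round(1 / carriers))   # ~30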

    The solution to the succinylcholine mystery was among the first links
    drawn between genetic variation and an individual's response to drugs.
    Since then, a small but growing number of differences in drug
    metabolism have been linked to genetics, helping explain why some
    patients benefit from a particular drug, some gain nothing, and others
    suffer toxic side effects.

    The same sort of variation, it is now clear, plays a key role in
    individual risks of coming down with a variety of diseases. Gene
    variants have been linked to elevated risks for disorders from
    Alzheimer's disease to breast cancer, and they may help explain why,
    for example, some smokers develop lung cancer whereas many others
    don't.

    These developments have led to hopes--and some hype--that we are on
    the verge of an era of personalized medicine, one in which genetic
    tests will determine disease risks and guide prevention strategies and
    therapies. But digging up the DNA responsible--if in fact DNA is
    responsible--and converting that knowledge into gene tests that
    doctors can use remains a formidable challenge.

    Many conditions, including various cancers, heart attacks, lupus, and
    depression, likely arise when a particular mix of genes collides with
    something in the environment, such as nicotine or a fatty diet. These
    multigene interactions are subtler and knottier than the single gene
    drivers of diseases such as hemophilia and cystic fibrosis; spotting
    them calls for statistical inspiration and rigorous experiments
    repeated again and again to guard against introducing unproven gene
    tests into the clinic. And determining treatment strategies will be no
    less complex: Last summer, for example, a team of scientists linked
    124 different genes to resistance to four leukemia drugs.

    But identifying gene networks like these is only the beginning. One of
    the toughest tasks is replicating these studies--an especially
    difficult proposition in diseases that are not overwhelmingly
    heritable, such as asthma, or ones that affect fairly small patient
    cohorts, such as certain childhood cancers. Many clinical trials do
    not routinely collect DNA from volunteers, making it sometimes
    difficult for scientists to correlate disease or drug response with
    genes. Gene microarrays, which measure the expression of thousands of
    genes at once, can be fickle and supply inconsistent results. Gene
    studies
    can also be prohibitively costly.

    Nonetheless, genetic dissection of some diseases--such as cancer,
    asthma, and heart disease--is galloping ahead. Progress in other
    areas, such as psychiatric disorders, is slower. Severely depressed or
    schizophrenic patients could benefit enormously from tests that reveal
    which drug and dose will help them the most, but unlike in asthma,
    drug response in these disorders can be difficult to quantify
    biologically, making gene-drug
    relations tougher to pin down.

    As DNA sequence becomes more available and technologies improve, the
    genetic patterns that govern health will likely come into sharper
    relief. Genetic tools still under construction, such as a haplotype
    map that will be used to discern genetic variation behind common
    diseases, could further accelerate the search for disease genes.

    The next step will be designing DNA tests to guide clinical
    decision-making--and using them. If history is any guide, integrating
    such tests into standard practice will take time. In emergencies--a
    heart attack, an acute cancer, or an asthma attack--such tests will be
    valuable only if they rapidly deliver results.

    Ultimately, comprehensive personalized medicine will come only if
    pharmaceutical companies want it to--and it will take enormous
    investments in research and development. Many companies worry that
    testing for genetic differences will narrow their market and squelch
    their profits.

    Still, researchers continue to identify new opportunities. In May, the
    Icelandic company deCODE Genetics reported that an experimental asthma
    drug that pharmaceutical giant Bayer had abandoned appeared to
    decrease the risk of heart attack in more than 170 patients who
    carried particular gene variants. The drug targets the protein
    produced by one of those genes. The finding is likely to be just a
    foretaste of the many surprises in store, as the braids binding DNA,
    drugs, and disease are slowly unwound.
    _________________________________________________________________

Can the Laws of Physics Be Unified?

    Charles Seife

    At its best, physics eliminates complexity by revealing underlying
    simplicity. Maxwell's equations, for example, describe all the
    confusing and diverse phenomena of classical electricity and magnetism
    by means of four simple rules. These equations are beautiful; they
    have an eerie symmetry, mirroring one another in an intricate dance of
    symbols. The four together feel as elegant, as whole, and as complete
    to a physicist as a Shakespearean sonnet does to a poet.

    The Standard Model of particle physics is an unfinished poem. Most of
    the pieces are there, and even unfinished, it is arguably the most
    brilliant opus in the literature of physics. With great precision, it
    describes all known matter--all the subatomic particles such as quarks
    and leptons--as well as the forces by which those particles interact
    with one another. These forces are electromagnetism, which describes
    how charged objects feel each other's influence; the weak force, which
    explains how particles can change their identities; and the strong
    force, which describes how quarks stick together to form protons and
    other composite particles. But as lovely as the Standard Model's
    description is, it is in pieces, and some of those pieces--those that
    describe gravity--are missing. It is a few shards of beauty that hint
    at something greater, like a few lines of Sappho on a fragment of
    papyrus.

    The beauty of the Standard Model is in its symmetry; mathematicians
    describe its symmetries with objects known as Lie groups. And a mere
    glimpse at the Standard Model's Lie group betrays its fragmented
    nature: SU(3) × SU(2) × U(1). Each of those pieces
    represents one type of symmetry, but the symmetry of the whole is
    broken. Each of the forces behaves in a slightly different way, so
    each is described with a slightly different symmetry.

    But those differences might be superficial. Electromagnetism and the
    weak force appear very dissimilar, but in the 1960s physicists showed
    that at high temperatures, the two forces "unify." It becomes apparent
    that electromagnetism and the weak force are really the same thing,
    just as it becomes obvious that ice and liquid water are the same
    substance if you warm them up together. This connection led physicists
    to hope that the strong force could also be unified with the other two
    forces, yielding one large theory described by a single symmetry such
    as SU(5).

    A unified theory should have observable consequences. For example, if
    the strong force truly is the same as the electroweak force, then
    protons might not be truly stable; once in a long while, they should
    decay spontaneously. Despite many searches, nobody has spotted a
    proton decay, nor has anyone sighted any particles predicted by some
    symmetry-enhancing modifications to the Standard Model, such as
    supersymmetry. Worse yet, even such a unified theory can't be
    complete--as long as it ignores gravity.

    Figure 1 Fundamental forces. A theory that ties all four forces
    together is still lacking.

    Gravity is a troublesome force. The theory that describes it, general
    relativity, assumes that space and time are smooth and continuous,
    whereas the underlying quantum physics that governs subatomic
    particles and forces is inherently discontinuous and jumpy. Gravity
    clashes with quantum theory so badly that nobody has come up with a
    convincing way to build a single theory that includes all the
    particles, the strong and electroweak forces, and gravity all in one
    big bundle. But physicists do have some leads. Perhaps the most
    promising is superstring theory.

    Superstring theory has a large following because it provides a way to
    unify everything into one large theory with a single symmetry--SO(32)
    for one branch of superstring theory, for example--but it requires a
    universe with 10 or 11 dimensions, scads of undetected particles, and
    a lot of intellectual baggage that might never be verifiable. It may
    be that there are dozens of unified theories, only one of which is
    correct, but scientists may never have the means to determine which.
    Or it may be that the struggle to unify all the forces and particles
    is a fool's quest.

    In the meantime, physicists will continue to look for proton decays,
    as well as search for supersymmetric particles in underground traps
    and in the Large Hadron Collider (LHC) in Geneva, Switzerland, when it
    comes online in 2007. Scientists believe that LHC will also reveal the
    existence of the Higgs boson, a particle intimately related to
    fundamental symmetries in the Standard Model of particle physics. And
    physicists hope that one day, they will be able to finish the
    unfinished poem and frame its fearful symmetry.
    _________________________________________________________________

How Much Can Human Life Span Be Extended?

    Jennifer Couzin

    When Jeanne Calment died in a nursing home in southern France in 1997,
    she was 122 years old, the longest-living human ever documented. But
    Calment's uncommon status will fade in subsequent decades if the
    predictions of some biologists and demographers come true. Life-span
    extension in species from yeast to mice and extrapolation from life
    expectancy trends in humans have convinced a swath of scientists that
    humans will routinely coast beyond 100 or 110 years of age. (Today, 1
    in 10,000 people in industrialized countries holds centenarian
    status.)
    Others say human life span may be far more limited. The elasticity
    found in other species might not apply to us. Furthermore, testing
    life-extension treatments in humans may be nearly impossible for
    practical and ethical reasons.

    Just 2 or 3 decades ago, research on aging was a backwater. But when
    molecular biologists began hunting for ways to prolong life, they
    found that life span was remarkably pliable. Reducing the activity of
    an insulinlike receptor more than doubles the life span of worms to a
    startling--for them--6 weeks. Put certain strains of mice on
    near-starvation but nutrient-rich diets, and they live 50% longer than
    normal.

    Some of these effects may not occur in other species. A worm's ability
    to enter a "dauer" state, which resembles hibernation, may be
    critical, for example. And shorter-lived species such as worms and
    fruit flies, whose aging has been delayed the most, may be more
    susceptible to life-span manipulation. But successful approaches are
    converging on a few key areas: calorie restriction; reducing levels of
    insulinlike growth factor 1 (IGF-1), a protein; and preventing
    oxidative damage to the body's tissues. All three might be
    interconnected, but so far that hasn't been confirmed (although
    calorie-restricted animals have low levels of IGF-1).

    Can these strategies help humans live longer? And how do we determine
    whether they will? Unlike drugs for cancer or heart disease, the
    benefits of antiaging treatments are fuzzier, making studies difficult
    to set up and to interpret. Safety is uncertain; calorie restriction
    reduces fertility in animals, and lab flies bred to live long can't
    compete with their wild counterparts. Furthermore, garnering
    results--particularly from younger volunteers, who may be likeliest to
    benefit because they've aged the least--will take so long that those
    who began the study will be dead before the data are in.

    That hasn't stopped scientists, some of whom have founded companies,
    from searching for treatments to slow aging. One intriguing question
    is whether calorie restriction works in humans. It's being tested in
    primates, and the National Institute on Aging in Bethesda, Maryland,
    is funding short-term studies in people. Volunteers in those trials
    have been on a stringent diet for up to 1 year while researchers
    monitor their metabolism and other factors that could hint at how
    they're aging.

    Insights could also come from genetic studies of centenarians, who may
    have inherited long life from their parents. Many scientists believe
    that average human life span has an inherent upper limit, although
    they don't agree on whether it's 85 or 100 or 150.

    One abiding question in the antiaging world is what the goal of all
    this work ought to be. Overwhelmingly, scientists favor treatments
    that will slow aging and stave off age-related diseases rather than
    simply extending life at its most decrepit. But even so, slowing aging
    could have profound social effects, upsetting actuarial tables and
    retirement plans.

    Then there's the issue of fairness: If antiaging therapies become
    available, who will receive them? How much will they cost? Individuals
    may find they can stretch their life spans. But that may be tougher to
    achieve for whole populations, although many demographers believe that
    the average life span will continue to climb as it has consistently
    for decades. If that happens, much of the increase may come from less
    dramatic strategies, such as heart disease and cancer prevention, that
    could also make the end of a long life more bearable.
    _________________________________________________________________

What Controls Organ Regeneration?

    R. John Davenport*

    Unlike automobiles, humans get along pretty well for most of their
    lives with their original parts. But organs do sometimes fail, and we
    can't go to the mechanic for an engine rebuild or a new water pump--at
    least not yet. Medicine has battled back many of the acute threats,
    such as infection, that curtailed human life in past centuries. Now,
    chronic illnesses and deteriorating organs pose the biggest drain on
    human health in industrialized nations, and they will only increase in
    importance as the population ages. Regenerative medicine--rebuilding
    organs and tissues--could conceivably be the 21st century equivalent
    of antibiotics in the 20th. Before that can happen, researchers must
    understand the signals that control regeneration.

    Researchers have puzzled for centuries over how body parts replenish
    themselves. In the mid-1700s, for instance, Swiss researcher Abraham
    Trembley noted that when chopped into pieces, hydra--tubelike
    creatures with tentacles that live in fresh water--could grow back
    into complete, new organisms. Other scientists of the era examined the
    salamander's ability to replace a severed tail. And a century later,
    Thomas Hunt Morgan scrutinized planaria, flatworms that can regenerate
    even when whittled into 279 bits. But he decided that regeneration was
    an intractable problem and forsook planaria in favor of fruit flies.

    Mainstream biology has followed in Morgan's wake, focusing on animals
    suitable for studying genetic and embryonic development. But some
    researchers have pressed on with studies of regeneration superstars,
    and they've devised innovative strategies to tackle the genetics of
    these organisms. These efforts and investigations of new regeneration
    models--such as zebrafish and special mouse lines--are beginning to
    reveal the forces that guide regeneration and those that prevent it.

    Animals exploit three principal strategies to regenerate organs.
    First, working organ cells that normally don't divide can multiply and
    grow to replenish lost tissue, as occurs in injured salamander hearts.
    Second, specialized cells can undo their training--a process known as
    dedifferentiation--and assume a more pliable form that can replicate
    and later respecialize to reconstruct a missing part. Salamanders and
    newts take this approach to heal and rebuild a severed limb, as do
    zebrafish to mend clipped fins. Finally, pools of stem cells can step
    in to perform required renovations. Planaria tap into this resource
    when reconstructing themselves.

    Figure 1 Self-repair. Newts reprogram their cells to reconstruct a
    severed limb.

    Humans already plug into these mechanisms to some degree. For
    instance, after surgical removal of part of a liver, healing signals
    tell remaining liver cells to resume growth and division to expand the
    organ back to its original size. Researchers have found that when
    properly enticed, some types of specialized human cells can revert to
    a more nascent state (see p. 85). And stem cells help replenish
    our blood, skin, and bones. So why do our hearts fill with scar
    tissue, our lenses cloud, and our brain cells perish?

    Animals such as salamanders and planaria regenerate tissues by
    rekindling genetic mechanisms that guide the patterning of body
    structures during embryonic development. We employ similar pathways to
    shape our parts as embryos, but over the course of evolution, humans
    may have lost the ability to tap into them as adults, perhaps because
    the cell division required for regeneration elevated the likelihood of
    cancer. And we may have evolved the capacity to heal wounds rapidly to
    repel infection, even though speeding the pace means more scarring.
    Regeneration pros such as salamanders heal wounds methodically and
    produce pristine tissue. Avoiding fibrotic tissue could mean the
    difference between regenerating and not: Mouse nerves grow vigorously
    if experimentally severed in a way that prevents scarring, but if a
    scar forms, nerves wither.

    Unraveling the mysteries of regeneration will depend on understanding
    what separates our wound-healing process from that of animals that are
    able to regenerate. The difference might be subtle: Researchers have
    identified one strain of mice that seals up ear holes in weeks,
    whereas typical strains never do. A relatively modest number of
    genetic differences seems to underlie the effect. Perhaps altering a
    handful of genes would be enough to turn us into superhealers, too.
    But if scientists succeed in initiating the process in humans, new
    questions will emerge. What keeps regenerating cells from running
    amok? And what ensures that regenerated parts are the right size and
    shape, and in the right place and orientation? If researchers can
    solve these riddles--and it's a big "if"--people might be able to
    order up replacement parts for themselves, not just their '67
    Mustangs.

    R. John Davenport is an editor of Science's SAGE KE.
    _________________________________________________________________

How Can a Skin Cell Become a Nerve Cell?

    Gretchen Vogel

    Like medieval alchemists who searched for an elixir that could turn
    base metals into gold, biology's modern alchemists have learned how to
    use oocytes to turn normal skin cells into valuable stem cells, and
    even whole animals. With practice, scientists have made nuclear
    transfer nearly routine, using it to produce cattle, cats, mice,
    sheep, goats, pigs, and--as a Korean team announced in May--even human
    embryonic stem (ES) cells.
    stem cells into treatments for previously untreatable diseases. But
    like the medieval alchemists, today's cloning and stem cell biologists
    are working largely with processes they don't fully understand: What
    actually happens inside the oocyte to reprogram the nucleus is still a
    mystery, and scientists have a lot to learn before they can direct a
    cell's differentiation as smoothly as nature's program of development
    does every time a fertilized egg gives rise to the multiple cell types
    that make up a live baby.

    Scientists have been investigating the reprogramming powers of the
    oocyte for half a century. In 1957, developmental biologists first
    discovered that they could insert the nuclei of adult frog cells into
    frog eggs and create dozens of genetically identical tadpoles. But in
    50 years, the oocyte has yet to give up its secrets.

    The answers lie deep in cell biology. Somehow, scientists know, the
    genes that control development--generally turned off in adult
    cells--get turned back on again by the oocyte, enabling the cell to
    take on the youthful potential of a newly fertilized egg. Scientists
    understand relatively little about these on-and-off switches in normal
    cells, however, let alone the unusual reversal that takes place during
    nuclear transfer.

    Figure 1 Cellular alchemist. A human oocyte.

    As cells differentiate, their DNA becomes more tightly packed, and
    genes that are no longer needed--or those that should not be
    expressed--are blocked. The DNA wraps tightly around proteins called
    histones, and genes are then tagged with methyl groups that prevent
    the protein-making machinery in the cell from reaching them.
    studies have shown that enzymes that remove those methyl groups are
    crucial for nuclear transfer to work. But they are far from the only
    things that are needed.

    If scientists could uncover the oocyte's secrets, it might be possible
    to replicate its tricks without using oocytes themselves, a resource
    that is fairly difficult to obtain and whose use raises numerous
    ethical questions. If scientists could come up with a
    cell-free bath that turned the clock back on already-differentiated
    cells, the implications could be enormous. Labs could rejuvenate cells
    from patients and perhaps then grow them into new tissue that could
    repair parts worn out by old age or disease.

    But scientists are far from sure if such cell-free alchemy is
    possible. The egg's very structure, its scaffolding of proteins that
    guide the chromosomes during cell division, may also play a key role
    in turning on the necessary genes. If so, developing an elixir of
    proteins that can turn back a cell's clock may remain elusive.

    To really make use of the oocyte's power, scientists still need to
    learn how to direct the development of the rejuvenated stem cells and
    guide them into forming specific tissues. Stem cells, especially those
    from embryos, spontaneously form dozens of cell types, but controlling
    that development to produce a single type of cell has proved more
    difficult. Although some teams have managed to produce nearly pure
    colonies of certain kinds of neural cells from ES cells, no one has
    managed to concoct a recipe that will direct the cells to become, say,
    a pure population of dopamine-producing neurons that could replace
    those missing in Parkinson's disease.

    Scientists are just beginning to understand how cues interact to guide
    a cell toward its final destiny. Decades of work in developmental
    biology have provided a start: Biologists have used mutant frogs,
    flies, mice, chicks, and fish to identify some of the main genes that
    control a developing cell's decision to become a bone cell or a muscle
    cell. But observing what goes wrong when a gene is missing is easier
    than learning to orchestrate differentiation in a culture dish.
    Understanding how the roughly 25,000 human genes work together to form
    tissues--and tweaking the right ones to guide an immature cell's
    development--will keep researchers occupied for decades. If they
    succeed, however, the result will be worth far more than its weight in
    gold.
    _________________________________________________________________

How Does a Single Somatic Cell Become a Whole Plant?

    Gretchen Vogel

    It takes a certain amount of flexibility for a plant to survive and
    reproduce. It can stretch its roots toward water and its leaves toward
    sunlight, but it has few options for escaping predators or finding
    mates. To compensate, many plants have evolved repair mechanisms and
    reproductive strategies that allow them to produce offspring even
    without the meeting of sperm and egg. Some can reproduce from
    outgrowths of stems, roots, and bulbs, but others are even more
    radical, able to create new embryos from single somatic cells. Most
    citrus trees, for example, can form embryos from the tissues
    surrounding the unfertilized gametes--a feat no animal can manage. The
    houseplant Bryophyllum can sprout embryos from the edges of its
    leaves, a bit like Athena springing from Zeus's head.

    Nearly 50 years ago, scientists learned that they could coax carrot
    cells to undergo such embryogenesis in the lab. Since then, people
    have used so-called somatic embryogenesis to propagate dozens of
    species, including coffee, magnolias, mangos, and roses. A Canadian
    company has planted entire forests of fir trees that started life in
    tissue culture. But like researchers who clone animals (see p.
    85), plant scientists understand little about what actually
    controls the process. The search for answers might shed light on how
    cells' fates become fixed during development, and how plants manage to
    retain such flexibility.

    Scientists aren't even sure which cells are capable of embryogenesis.
    Although earlier work assumed that all plant cells were equally
    labile, recent evidence suggests that only a subset of cells can
    transform into embryos. But what those cells look like before their
    transformation is a mystery. Researchers have videotaped cultures in
    which embryos develop but found no visual pattern that hints at which
    cells are about to sprout, and staining for certain patterns of gene
    expression has been inconclusive.

    Figure 1 Power of one. Orange tree embryos can sprout from a
    single somatic cell.

    Researchers do have a few clues about the molecules that might be
    involved. In the lab, the herbicide 2,4-dichlorophenoxyacetic acid
    (sold as weed killer and called 2,4-D) can prompt cells in culture to
    elongate, build a new cell wall, and start dividing to form embryos.
    The herbicide is a synthetic analog of the plant hormones called
    auxins, which control everything from the plant's response to light
    and gravity to the ripening of fruit. Auxins might also be important
    in natural somatic embryogenesis: Embryos that sprout on top of veins
    near the leaf edge are exposed to relatively high levels of auxins.
    Recent work has also shown that over- or underexpression of certain
    genes in Arabidopsis plants can prompt embryogenesis in otherwise
    normal-looking leaf cells.

    Sorting out sex-free embryogenesis might help scientists understand
    the cellular switches that plants use to stay flexible while still
    keeping growth under control. Developmental biologists are keen to
    learn how those mechanisms compare in plants and animals. Indeed, some
    of the processes that control somatic embryogenesis may be similar to
    those that occur during animal cloning or limb regeneration (see p.
    84).

    On a practical level, scientists would like to be able to use
    lab-propagation techniques on crop plants such as maize that still
    require normal pollination. That would speed up both breeding of new
    varieties and the production of hybrid seedlings--a flexibility that
    farmers and consumers could both appreciate.
    _________________________________________________________________

How Does Earth's Interior Work?

    Richard A. Kerr

    The plate tectonics revolution went only so deep. True, it made
    wonderful sense of most of the planet's geology. But that's something
    like understanding the face of Big Ben; there must be a lot more
    inside to understand about how and why it all works. In the case of
    Earth, there's another 6300 kilometers of rock and iron beneath the
    tectonic plates whose churnings constitute the inner workings of a
    planetary heat engine. Tectonic plates jostling about the surface are
    like the hands sweeping across the clock face: informative in many
    ways but largely mute as to what drives them.

    Figure 1 A long way to go. Grasping the workings of plate
    tectonics will require deeper probing.

    Earth scientists inherited a rather simple picture of Earth's interior
    from their pre-plate tectonics colleagues. Earth was like an onion.
    Seismic waves passing through the deep Earth suggested that beneath
    the broken skin of plates lies a 2800-kilometer layer of rocky mantle
    overlying 3470 kilometers of molten and--at the center--solid iron.
    The mantle was further subdivided at a depth of 670 kilometers into
    upper and lower layers, with a hint of a layer a couple of hundred
    kilometers thick at the bottom of the lower mantle.
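
    The onion's layers sum neatly (a consistency check of my own; the
    ~100-kilometer plate thickness is a typical value, not a figure from
    the article):

        plates = 100    # km, typical tectonic-plate thickness
        mantle = 2800   # km of rocky mantle
        core = 3470     # km of iron, molten then solid, to the center
        print(plates + mantle + core)   # ~6370 km, Earth's mean radius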

    In the postrevolution era, the onion model continued to loom large.
    The dominant picture of Earth's inner workings divided the planet at
    the 670-kilometer depth, forming with the core a three-layer machine.
    Above the 670, the mantle churned slowly like a very shallow pot of
    boiling water, delivering heat and rock at mid-ocean ridges to make
    new crust and cool the interior and accepting cold sinking slabs of
    old plate at deep-sea trenches. A plume of hot rock might rise from
    just above the 670 to form a volcanic hot spot like Hawaii. But no hot
    rock rose up through the 670 barrier, and no cold rock sank down
    through it. Alternatively, argued a smaller contingent, the mantle
    churned from bottom to top like a deep stockpot, with plumes rising
    all the way from the core-mantle boundary.

    Forty years of probing inner Earth with ever more sophisticated
    seismic imaging has boosted the view of the engine's complexity
    without much calming the debate about how it works. Imaging now
    clearly shows that the 670 is no absolute barrier. Slabs penetrate the
    boundary, although with difficulty. Layered-earth advocates have duly
    dropped their impenetrable boundary to 1000 kilometers or deeper. Or
    maybe there's a flexible, semipermeable boundary somewhere that limits
    mixing to only the most insistent slabs or plumes.

    Now seismic imaging is also outlining two great globs of mantle rock
    standing beneath Africa and the Pacific like pistons. Researchers
    disagree whether they are hotter than average and rising under their
    own buoyancy, denser and sinking, or merely passively being carried
    upward by adjacent currents. Thin lenses of partially melted rock dot
    the mantle bottom, perhaps marking the bottom of plumes, or perhaps
    not. Geochemists reading the entrails of elements and isotopes in
    mantle-derived rocks find signs of five long-lived "reservoirs" that
    must have resisted mixing in the mantle for billions of years. But
    they haven't a clue where in the depths of the mantle those reservoirs
    might be hiding.

    How can we disassemble the increasingly complex planetary machine and
    find what makes it tick? With more of the same, plus a large dose of
    patience. After all, plate tectonics was more than a half-century in
    the making, and those revolutionaries had to look little deeper than
    the sea floor.

    Seismic imaging will continue to improve as better seismometers are
    spread more evenly about the globe. Seismic data are already
    distinguishing between temperature and compositional effects, painting
    an even more complex picture of mantle structure. Mineral physicists
    working in the lab will tease out more properties of rock under deep
    mantle conditions to inform interpretation of the seismic data,
    although still handicapped by the uncertain details of mantle
    composition. And modelers will more faithfully simulate the whole
    machine, drawing on seismics, mineral physics, and subtle geophysical
    observations such as gravity variations. Another 40 years should do
    it.
    _________________________________________________________________

Are We Alone in the Universe?

    Richard A. Kerr

    Alone, in all that space? Not likely. Just do the numbers: Several
    hundred billion stars in our galaxy, hundreds of billions of galaxies
    in the observable universe, and 150 planets spied already in the
    immediate neighborhood of the sun. That should make for plenty of
    warm, scummy little ponds where life could come together to begin
    billions of years of evolution toward technology-wielding creatures
    like ourselves. No, the really big question is when, if ever, we'll
    have the technological wherewithal to reach out and touch such
    intelligence. With a bit of luck, it could be in the next 25 years.
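
    The "do the numbers" argument is essentially a Drake-style product.
    A minimal sketch, with every factor an assumption of mine rather
    than a figure from the article:

        stars = 3e11           # stars in our galaxy
        f_planets = 0.5        # fraction with planetary systems
        n_habitable = 0.01     # habitable worlds per star
        f_life = 0.01          # of those, fraction where life starts
        print(stars * f_planets * n_habitable * f_life)   # ~15 million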

    Workers in the search for extraterrestrial intelligence (SETI) would
    have needed more than a little luck to succeed in the first 45 years
    of the modern hunt for like-minded colleagues out there. Radio
    astronomer
    Frank Drake's landmark Project Ozma was certainly a triumph of hope
    over daunting odds. In 1960, Drake pointed a 26-meter radio telescope
    dish in Green Bank, West Virginia, at two stars for a few days each.
    Given the vacuum-tube technology of the time, he could scan across 0.4
    megahertz of the microwave spectrum one channel at a time.

    Almost 45 years later, the SETI Institute in Mountain View,
    California, completed its 10-year-long Project Phoenix. Often using
    the 305-meter antenna at Arecibo, Puerto Rico, Phoenix researchers
    searched 710 star systems at 28 million channels simultaneously across
    an 1800-megahertz range. All in all, the Phoenix search was 100
    trillion times more effective than Ozma was.
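
    The "100 trillion" figure is plausible on the rough assumption
    (mine; the article gives no formula) that search power scales with
    the product of stars surveyed, simultaneous channels, and spectral
    range covered:

        gain = (710 / 2) * (28e6 / 1) * (1800 / 0.4)
        print(f"{gain:.1e}")   # ~4.5e13, the order of 100 trillion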

    Besides stunning advances in search power, the first 45 years of
    modern SETI have also seen a diversification of search strategies. The
    Search for Extraterrestrial Radio Emissions from Nearby Developed
    Intelligent Populations (SERENDIP) has scanned billions of radio
    sources in the Milky Way by piggybacking receivers on antennas in use
    by observational astronomers, including Arecibo. And other groups are
    turning modest-sized optical telescopes to searching for nanosecond
    flashes from alien lasers.

    Figure 1 Listening for E.T. The SETI Institute is deploying an
    array of antennas and tying them into a giant "virtual telescope."

    Still, nothing has been heard. But then, Phoenix, for example, scanned
    just one or two nearby sunlike stars out of each 100 million stars out
    there. For such sparse sampling to work, advanced, broadcasting
    civilizations would have to be abundant, or searchers would have to
    get very lucky.

    To find the needle in a galaxy-size haystack, SETI workers are
    counting on the consistently exponential growth of computing power to
    continue for another couple of decades. In northern California, the
    SETI Institute has already begun constructing an array composed of
    individual 6-meter antennas. Ever-cheaper computer power will
    eventually tie 350 such antennas into "virtual telescopes," allowing
    scientists to search many targets at once. If Moore's law--that the
    cost of computation halves every 18 months--holds for another 15 years
    or so, SETI workers plan to use this antenna array approach to check
    out not a few thousand but perhaps a few million or even tens of
    millions of stars for alien signals. If there were just 10,000
    advanced civilizations in the galaxy, they could well strike pay dirt
    before Science turns 150.
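
    The arithmetic behind that plan, as a one-liner (my illustration):
    halving costs every 18 months compounds to a factor of 2**(15/1.5),
    or about a thousandfold, over 15 years--roughly the leap from
    thousands of stars to millions.

        print(2 ** (15 / 1.5))   # 1024.0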

    The technology may well be available in coming decades, but SETI will
    also need money. That's no easy task in a field with as high a "giggle
    factor" as SETI has. The U.S. Congress forced NASA to wash its hands
    of SETI in 1993 after some congressmen mocked the whole idea of
    spending federal money to look for "little green men with misshapen
    heads," as one of them put it. Searching for another tippy-top branch
    of the evolutionary tree still isn't part of the NASA vision. For more
    than a decade, private funding alone has driven SETI. But the SETI
    Institute's planned $35 million array is only a prototype of the
    Square Kilometer Array that would put those tens of millions of stars
    within reach of SETI workers. For that, mainstream radio astronomers
    will have to be onboard--or we'll be feeling alone in the universe a
    long time indeed.
    _________________________________________________________________

How and Where Did Life on Earth Arise?

    Carl Zimmer*

    For the past 50 years, scientists have attacked the question of how
    life began in a pincer movement. Some approach it from the present,
    moving backward in time from life today to its simpler ancestors.
    Others march forward from the formation of Earth 4.55 billion years
    ago, exploring how lifeless chemicals might have become organized into
    living matter.

    Working backward, paleontologists have found fossils of microbes
    dating back at least 3.4 billion years. Chemical analysis of even
    older rocks suggests that photosynthetic organisms were already well
    established on Earth by 3.7 billion years ago. Researchers suspect
    that the organisms that left these traces shared the same basic traits
    found in all life today. All free-living organisms encode genetic
    information in DNA and catalyze chemical reactions using proteins.
    Because DNA and proteins depend so intimately on each other for their
    survival, it's hard to imagine one of them having evolved first. But
    it's just as implausible for them to have emerged simultaneously out
    of a prebiotic soup.

    Experiments now suggest that earlier forms of life could have been
    based on a third kind of molecule found in today's organisms: RNA.
    Once considered nothing more than a cellular courier, RNA turns out to
    be astonishingly versatile, not only encoding genetic information but
    also acting like a protein. Some RNA molecules switch genes on and
    off, for example, whereas others bind to proteins and other molecules.
    Laboratory experiments suggest that RNA could have replicated itself
    and carried out the other functions required to keep a primitive cell
    alive.

    Only after life passed through this "RNA world," many scientists now
    agree, did it take on a more familiar cast. Proteins are thousands of
    times more efficient as catalysts than RNA is, and so once they
    emerged they would have been favored by natural selection. Likewise,
    genetic information can be replicated from DNA with far fewer errors
    than it can from RNA.

    Other scientists have focused their efforts on figuring out how the
    lifeless chemistry of a prebiotic Earth could have given rise to an
    RNA world. In 1953, working at the University of Chicago, Stanley
    Miller and Harold Urey demonstrated that experiments could shed light
    on this question. They ran an electric current through a mix of
    ammonia, methane, and other gases believed at the time to have been
    present on early Earth. They found that they could produce amino acids
    and other important building blocks of life.

    Figure 1 Cauldron of life? Deep-sea vents are one proposed site
    for life's start.

    Today, many scientists argue that the early atmosphere was dominated
    by other gases, such as carbon dioxide. But experiments in recent
    years have shown that under these conditions, many building blocks of
    life can be formed. In addition, comets and meteorites may have
    delivered organic compounds from space.

    Just where on Earth these building blocks came together as primitive
    life forms is a subject of debate. Starting in the 1980s, many
    scientists argued that life got its start in the scalding,
    mineral-rich waters streaming out of deep-sea hydrothermal vents.
    Evidence for a hot start included studies on the tree of life, which
    suggested that the most primitive species of microbes alive today
    thrive in hot water. But the hot-start hypothesis has cooled off a
    bit. Recent studies suggest that heat-loving microbes are not living
    fossils. Instead, they may have descended from less hardy species and
    evolved new defenses against heat. Some skeptics also wonder how
    delicate RNA molecules could have survived in boiling water. No single
    strong hypothesis has taken the hot start's place, however, although
    suggestions include tidal pools or oceans covered by glaciers.

    Research projects now under way may shed more light on how life began.
    Scientists are running experiments in which RNA-based cells may be
    able to reproduce and evolve. NASA and the European Space Agency have
    launched probes that will visit comets, narrowing down the possible
    ingredients that might have been showered on early Earth.

    Most exciting of all is the possibility of finding signs of life on
    Mars. Recent missions to Mars have provided strong evidence that
    shallow seas of liquid water once existed on the Red
    Planet--suggesting that Mars might once have been hospitable to life.
    Future Mars missions will look for signs of life hiding in
    underground refuges, or fossils of extinct creatures. If life does
    turn up, the discovery could mean that life arose independently on
    both planets--suggesting that it is common in the universe--or that it
    arose on one planet and spread to the other. Perhaps martian microbes
    were carried to Earth on a meteorite 4 billion years ago, infecting
    our sterile planet.
                  __________________________________________

    Carl Zimmer is the author of Soul Made Flesh: The Discovery of the
    Brain--and How it Changed the World.
    _________________________________________________________________

What Determines Species Diversity?

    Elizabeth Pennisi

    Countless species of plants, animals, and microbes fill every crack
    and crevice on land and in the sea. They make the world go 'round,
    converting sunlight to energy that fuels the rest of life, cycling
    carbon and nitrogen between inorganic and organic forms, and modifying
    the landscape.

    In some places and some groups, hundreds of species exist, whereas in
    others, very few have evolved; the tropics, for example, are a complex
    paradise compared to higher latitudes. Biologists are striving to
    understand why. The interplay between environment and living organisms,
    and among the organisms themselves, plays a key role in encouraging or
    discouraging diversity, as do human disturbances, predator-prey
    relationships, and other food web connections. But exactly how these
    and other forces work together to shape diversity is largely a
    mystery.

    The challenge is daunting. Baseline data are poor, for example: We
    don't yet know how many plant and animal species there are on Earth,
    and researchers can't even begin to predict the numbers and kinds of
    organisms that make up the microbial world. Researchers probing the
    evolution of, and limits to, diversity also lack a standardized time
    scale because evolution takes place over periods lasting from days to
    millions of years. Moreover, there can be almost as much variation
    within a species as between two closely related ones. Nor is it clear
    which genetic changes give rise to new species, or how strongly those
    changes actually influence speciation.

    Understanding what shapes diversity will require a major
    interdisciplinary effort, involving paleontological interpretation,
    field studies, laboratory experimentation, genomic comparisons, and
    effective statistical analyses. A few exhaustive inventories, such as
    the United Nations' Millennium Project and an around-the-world
    assessment of genes from marine microbes, should improve baseline
    data, but they will barely scratch the surface. Models that predict
    when one species will split into two will help. And an emerging
    discipline called evo-devo is probing how genes involved in
    development contribute to evolution. Together, these efforts will go a
    long way toward clarifying the history of life.

    Paleontologists have already made headway in tracking the expansion
    and contraction of the ranges of various organisms over the millennia.
    They are finding that geographic distribution plays a key role in
    speciation. Future studies should continue to reveal large-scale
    patterns of distribution and perhaps shed more light on the origins of
    mass extinctions and the effects of these catastrophes on the
    evolution of new species.

    From field studies of plants and animals, researchers have learned
    that habitat can influence morphology and behavior--particularly
    sexual selection--in ways that hasten or slow down speciation.
    Evolutionary biologists have also discovered that speciation can stall
    out, for example, as separated populations become reconnected,
    homogenizing genomes that would otherwise diverge. Molecular forces,
    such as low mutation rates or meiotic drive--in which certain alleles
    have an increased likelihood of being passed from one generation to
    the next--influence the rate of speciation.
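
    To see how meiotic drive can speed change, consider a minimal
    one-locus sketch (an illustration, not a model from the article): if
    heterozygotes pass on the driven allele with probability k greater
    than the Mendelian one-half, the allele spreads even with no fitness
    advantage. All numbers below are made up.

        # Toy model of meiotic drive; illustrative assumptions throughout.
        # A driven allele at frequency p is transmitted by heterozygotes
        # with probability k > 0.5 instead of the Mendelian 0.5; genotypes
        # are in Hardy-Weinberg proportions and have equal fitness.

        def next_freq(p, k):
            # AA parents always transmit A; Aa parents transmit A with prob k
            return p * p + 2.0 * p * (1.0 - p) * k

        p, k = 0.01, 0.6
        for gen in range(101):
            if gen % 20 == 0:
                print(f"generation {gen:3d}: driven-allele frequency = {p:.3f}")
            p = next_freq(p, k)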

    Diversity can also vary within a single ecosystem: Edges of an
    ecosystem sometimes support fewer species than its interior.

    Evolutionary biologists are just beginning to sort out how all these
    factors are intertwined in different ways for different groups of
    organisms. The task is urgent: Figuring out what shapes diversity
    could be important for understanding the nature of the wave of
    extinctions the world is experiencing and for determining strategies
    to mitigate it.
    _________________________________________________________________

What Genetic Changes Made Us Uniquely Human?

    Elizabeth Culotta

    Every generation of anthropologists sets out to explore what it is
    that makes us human. Famed paleoanthropologist Louis Leakey thought
    tools made the man, and so when he uncovered hominid bones near stone
    tools in Tanzania in the 1960s, he labeled the putative toolmaker Homo
    habilis, the earliest member of the human genus. But then
    primatologist Jane Goodall demonstrated that chimps also use tools of
    a sort, and today researchers debate whether H. habilis truly belongs
    in Homo. Later studies have homed in on traits such as bipedality,
    culture, language, humor, and, of course, a big brain as the unique
    birthright of our species. Yet many of these traits can also be found,
    at least to some degree, in other creatures: Chimps have rudimentary
    culture, parrots speak, and some rats seem to giggle when tickled.

    What is beyond doubt is that humans, like every other species, have a
    unique genome shaped by our evolutionary history. Now, for the first
    time, scientists can address anthropology's fundamental question at a
    new level: What are the genetic changes that make us human?

    With the human genome in hand and primate genome data beginning to
    pour in, we are entering an era in which it may become possible to
    pinpoint the genetic changes that help separate us from our closest
    relatives. A rough draft of the chimp sequence has already been
    released, and a more detailed version is expected soon. The genome of
    the macaque is nearly complete, the orangutan is under way, and the
    marmoset was recently approved. All these will help reveal the
    ancestral genotype at key places on the primate tree.

    The genetic differences revealed between humans and chimps are likely
    to be profound, despite the oft-repeated statistic that only about
    1.2% of our DNA differs from that of chimps. A change in every 100th
    base could affect thousands of genes, and the percentage difference
    becomes much larger if you count insertions and deletions. Even if we
    document all of the perhaps 40 million sequence differences between
    humans and chimps, what do they mean? Many are probably simply the
    consequence of 6 million years of genetic drift, with little effect on
    body or behavior, whereas other small changes--perhaps in regulatory,
    noncoding sequences--may have dramatic consequences.
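
    The arithmetic behind those figures is easy to reproduce. A
    back-of-the-envelope sketch, using round numbers for genome size and
    divergence (the exact values are assumptions):

        # Rough arithmetic behind the "perhaps 40 million" differences;
        # round, illustrative numbers only.
        genome_bases = 3.0e9        # approximate human genome length
        divergence = 0.012          # ~1.2% single-base difference from chimp

        substitutions = genome_bases * divergence
        print(f"single-base differences: ~{substitutions / 1e6:.0f} million")
        # ~36 million substitutions; insertions and deletions push the
        # total toward the 40 million cited above.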

    Half of the differences might define a chimp rather than a human. How
    can we sort them all out?

    One way is to zero in on the genes that have been favored by natural
    selection in humans. Studies seeking subtle signs of selection in the
    DNA of humans and other primates have identified dozens of genes, in
    particular those involved in host-pathogen interactions, reproduction,
    sensory systems such as olfaction and taste, and more.

    But not all of these genes helped set us apart from our ape cousins
    originally. Our genomes reveal that we have evolved in response to
    malaria, but malaria defense didn't make us human. So some researchers
    have started with clinical mutations that impair key traits, then
    traced the genes' evolution, an approach that has identified a handful
    of tantalizing genes. For example, MCPH1 and ASPM cause microcephaly
    when mutated, FOXP2 causes speech defects, and all three show signs of
    selection pressure during human, but not chimp, evolution. Thus they
    may have played roles in the evolution of humans' large brains and
    speech.

    But even with genes like these, it is often difficult to be completely
    sure of what they do. Knockout experiments, the classic way to reveal
    function, can't be done in humans and apes for ethical reasons. Much
    of the work will therefore demand comparative analyses of the genomes
    and phenotypes of large numbers of humans and apes. Already, some
    researchers are pushing for a "great ape 'phenome' project" to match
    the incoming tide of genomic data with more phenotypic information on
    apes. Other researchers argue that clues to function can best be
    gleaned by mining natural human variability, matching mutations in
    living people to subtle differences in biology and behavior. Both
    strategies face logistical and ethical problems, but some progress
    seems likely.

    A complete understanding of uniquely human traits will, however,
    include more than DNA. Scientists may eventually circle back to those
    long-debated traits of sophisticated language, culture, and
    technology, in which nurture as well as nature plays a leading role.
    We're in the age of the genome, but we can still recognize that it
    takes much more than genes to make the human.
    _________________________________________________________________

How Are Memories Stored and Retrieved?

    Greg Miller

    Packed into the kilogram or so of neural wetware between the ears is
    everything we know: a compendium of useful and trivial facts about the
    world, the history of our lives, plus every skill we've ever learned,
    from riding a bike to persuading a loved one to take out the trash.
    Memories make each of us unique, and they give continuity to our
    lives. Understanding how memories are stored in the brain is an
    essential step toward understanding ourselves.

    Neuroscientists have already made great strides, identifying key brain
    regions and potential molecular mechanisms. Still, many important
    questions remain unanswered, and a chasm gapes between the molecular
    and whole-brain research.

    The birth of the modern era of memory research is often pegged to the
    publication, in 1957, of an account of the neurological patient H.M.
    At age 27, H.M. had large chunks of the temporal lobes of his brain
    surgically removed in a last-ditch effort to relieve chronic epilepsy.
    The surgery worked, but it left H.M. unable to remember anything that
    happened--or anyone he met--after his surgery. The case showed that
    the medial temporal lobes (MTL), which include the hippocampus, are
    crucial for making new memories. H.M.'s case also revealed, on closer
    examination, that memory is not a monolith: Given a tricky mirror
    drawing task, H.M.'s performance improved steadily over 3 days even
    though he had no memory of his previous practice. Remembering how is
    not the same as remembering what, as far as the brain is concerned.

    Thanks to experiments on animals and the advent of human brain
    imaging, scientists now have a working knowledge of the various kinds
    of memory as well as which parts of the brain are involved in each.
    But persistent gaps remain. Although the MTL has indeed proved
    critical for declarative memory--the recollection of facts and
    events--the region remains something of a black box. How its various
    components interact during memory encoding and retrieval is
    unresolved. Moreover, the MTL is not the final repository of
    declarative memories. Such memories are apparently filed to the
    cerebral cortex for long-term storage, but how this happens, and how
    memories are represented in the cortex, remains unclear.

    More than a century ago, the great Spanish neuroanatomist Santiago
    Ramón y Cajal proposed that making memories must require neurons to
    strengthen their connections with one another. Dogma at the time held
    that no new neurons are born in the adult brain, so Ramón y Cajal made
    the reasonable assumption that the key changes must occur between
    existing neurons. Until recently, scientists had few clues about how
    this might happen.

    Figure 1 Memorable diagram. Santiago Ramón y Cajal's drawing of
    the hippocampus. He proposed that memories involve strengthened neural
    connections.

    Since the 1970s, however, work on isolated chunks of nervous-system
    tissue has identified a host of molecular players in memory formation.
    Many of the same molecules have been implicated in both declarative
    and nondeclarative memory and in species as varied as sea slugs, fruit
    flies, and rodents, suggesting that the molecular machinery for memory
    has been widely conserved. A key insight from this work has been that
    short-term memory (lasting minutes) involves chemical modifications
    that strengthen existing connections, called synapses, between
    neurons, whereas long-term memory (lasting days or weeks) requires
    protein synthesis and probably the construction of new synapses.

    Tying this work to the whole-brain research is a major challenge. A
    potential bridge is a process called long-term potentiation (LTP), a
    type of synaptic strengthening that has been scrutinized in slices of
    rodent hippocampus and is widely considered a likely physiological
    basis for memory. A conclusive demonstration that LTP really does
    underlie memory formation in vivo would be a big breakthrough.
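
    As a cartoon of what "strengthened connections" means in computational
    terms, here is a minimal Hebbian-style update of the sort used in
    neural-network models; it illustrates the principle that co-active
    neurons wire together, and is not a model of real LTP.

        import numpy as np

        # Toy Hebbian strengthening: synapses between neurons that fire
        # together grow stronger with repeated experience. Illustrative only.
        rng = np.random.default_rng(0)
        n_neurons, rate = 8, 0.1
        w = np.zeros((n_neurons, n_neurons))      # synaptic weights

        for _ in range(100):                      # repeated "experiences"
            x = (rng.random(n_neurons) < 0.3).astype(float)  # co-active set
            w += rate * np.outer(x, x)            # strengthen co-active pairs
        np.fill_diagonal(w, 0.0)                  # ignore self-connections

        print("strongest synapse:", w.max(), "weakest:", w.min())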

    Meanwhile, more questions keep popping up. Recent studies have found
    that patterns of neural activity seen when an animal is learning a new
    task are replayed later during sleep. Could this play a role in
    solidifying memories? Other work shows that our memories are not as
    trustworthy as we generally assume. Why is memory so labile? A hint
    may come from recent studies that revive the controversial notion that
    memories are briefly vulnerable to manipulation each time they're
    recalled. Finally, the no-new-neurons dogma went down in flames in the
    1990s, with the demonstration that the hippocampus, of all places, is
    a virtual neuron nursery throughout life. The extent to which these
    newborn cells support learning and memory remains to be seen.
    _________________________________________________________________

How Did Cooperative Behavior Evolve?

    Elizabeth Pennisi

    When Charles Darwin was working out his grand theory on the origin of
    species, he was perplexed by the fact that animals from ants to people
    form social groups in which most individuals work for the common good.
    This seemed to run counter to his proposal that individual fitness was
    key to surviving over the long term.

    By the time he wrote The Descent of Man, however, he had come up with
    a few explanations. He suggested that natural selection could
    encourage altruistic behavior among kin so as to improve the
    reproductive potential of the "family." He also introduced the idea of
    reciprocity: that unrelated but familiar individuals would help each
    other out if both were altruistic. A century of work with dozens of
    social species has borne out his ideas to some degree, but the details
    of how and why cooperation evolved remain to be worked out. The
    answers could help explain human behaviors that seem to make little
    sense from a strict evolutionary perspective, such as risking one's
    life to save a drowning stranger.

    Animals help each other out in many ways. In social species from
    honeybees to naked mole rats, kinship fosters cooperation: Females
    forgo reproduction and instead help the dominant female with her
    young. And common agendas help unrelated individuals work together.
    Male chimpanzees, for example, gang up against predators, protecting
    each other at a potential cost to themselves.

    Generosity is pervasive among humans. Indeed, some anthropologists
    argue that the evolution of the tendency to trust one's relatives and
    neighbors helped humans become Earth's dominant vertebrate: The
    ability to work together provided our early ancestors more food,
    better protection, and better childcare, which in turn improved
    reproductive success.

    However, the degree of cooperation varies. "Cheaters" can gain a leg
    up on the rest of humankind, at least in the short term. But
    cooperation prevails among many species, suggesting that this behavior
    is a better survival strategy, over the long run, despite all the
    strife among ethnic, political, religious, even family groups now
    rampant within our species.

    Evolutionary biologists and animal behavior researchers are searching
    out the genetic basis and molecular drivers of cooperative behaviors,
    as well as the physiological, environmental, and behavioral impetus
    for sociality. Neuroscientists studying mammals from voles to hyenas
    are discovering key correlations between brain chemicals and social
    strategies.

    Others with a more mathematical bent are applying evolutionary game
    theory, a modeling approach developed for economics, to quantify
    cooperation and predict behavioral outcomes under different
    circumstances. Game theory has helped reveal a seemingly innate desire
    for fairness: Game players will spend time and energy to punish unfair
    actions, even though they gain nothing themselves by doing so. Similar
    studies have shown that even when two people meet
    just once, they tend to be fair to each other. Those actions are hard
    to explain, as they don't seem to follow the basic tenet that
    cooperation is really based on self-interest.
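
    The experiments alluded to here are variants of the "ultimatum game."
    A minimal sketch of its logic (the stake and the rejection threshold
    are illustrative assumptions):

        # Ultimatum game: a proposer offers a split of 10 units; the
        # responder can reject, in which case BOTH get nothing. Rejecting
        # a low offer is costly punishment of unfairness.

        def ultimatum(offer, min_acceptable=3):
            """Return (proposer_payoff, responder_payoff)."""
            if offer >= min_acceptable:
                return 10 - offer, offer     # offer accepted
            return 0, 0                      # rejected: both pay

        print(ultimatum(5))   # fair offer:   (5, 5)
        print(ultimatum(1))   # unfair offer: (0, 0) -- punishment at a cost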

    The models developed through these games are still imperfect. They do
    not adequately consider, for example, the effect of emotions on
    cooperation. Nonetheless, with game theory's increasing
    sophistication, researchers hope to gain a clearer sense of the rules
    that govern complex societies.

    Together, these efforts are helping social scientists and others build
    on Darwin's observations about cooperation. As Darwin predicted,
    reciprocity is a powerful fitness tactic. But it is not a pervasive
    one.

    Modern researchers have discovered that a good memory is a
    prerequisite: It seems reciprocity is practiced only by organisms that
    can keep track of those who are helpful and those who are not. Humans
    have a great memory for faces and thus can maintain lifelong good--or
    hard--feelings toward people they don't see for years. Most other
    species exhibit reciprocity only over very short time scales, if at
    all.

    Limited to his personal observations, Darwin was able to come up with
    only general rationales for cooperative behavior. Now, with new
    insights from game theory and other promising experimental approaches,
    biologists are refining Darwin's ideas and, bit by bit, hope that one
    day they will understand just what it takes to bring out our
    cooperative spirit.
    _________________________________________________________________

How Will Big Pictures Emerge From a Sea of Biological Data?

    Elizabeth Pennisi

    Biology is rich in descriptive data--and getting richer all the time.
    Large-scale methods of probing samples, such as DNA sequencing,
    microarrays, and automated gene-function studies, are filling new
    databases to the brim. Many subfields from biomechanics to ecology
    have gone digital, and as a result, observations are more precise and
    more plentiful. A central question now confronting virtually all
    fields of biology is whether scientists can deduce from this torrent
    of molecular data how systems and whole organisms work. All this
    information needs to be sifted, organized, compiled, and--most
    importantly--connected in a way that enables researchers to make
    predictions based on general principles.

    Enter systems biology. Loosely defined and still struggling to find
    its way, this newly emerging approach aims to connect the dots that
    have emerged from decades of molecular, cellular, organismal, and even
    environmental observations. Its proponents seek to make biology more
    quantitative by relying on mathematics, engineering, and computer
    science to build a more rigorous framework for linking disparate
    findings. They argue that it is the only way the field can move
    forward. And they suggest that biomedicine, particularly deciphering
    risk factors for disease, will benefit greatly.

    The field got a big boost from the completion of the human genome
    sequence. The product of a massive, trip-to-the-moon logistical
    effort, the sequence is now a hard and fast fact. The biochemistry of
    human inheritance has been defined and measured. And that has inspired
    researchers to try to make other aspects of life equally knowable.

    Molecular geneticists dream of having a similarly comprehensive view
    of networks that control genes: For example, they would like to
    identify rules explaining how a single DNA sequence can express
    different proteins, or varying amounts of protein, in different
    circumstances (see p. [36]80). Cell biologists would like to reduce
    the complex communication patterns traced by molecules that regulate
    the health of the cell to a set of signaling rules. Developmental
    biologists would like a comprehensive picture of how the embryo
    manages to direct a handful of cells into a myriad of specialized
    functions in bone, blood, and skin tissue. These hard puzzles can only
    be solved by systems biology, proponents say. The same can be said for
    neuroscientists trying to work out the emergent properties--higher
    thought, for example--hidden in complex brain circuits. To understand
    ecosystem changes, including global warming, ecologists need ways to
    incorporate physical as well as biological data into their thinking.

    Figure 1 Systems approach. Circuit diagrams help clarify nerve
    cell functions.

    Today, systems biologists have only begun to tackle relatively simple
    networks. They have worked out the metabolic pathway in yeast for
    breaking down galactose, a carbohydrate. Others have tracked the first
    few hours of the embryonic development of sea urchins and other
    organisms with the goal of seeing how various transcription factors
    alter gene expression over time. Researchers are also developing
    rudimentary models of signaling networks in cells and simple brain
    circuits.
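
    To give a flavor of such models, here is a toy simulation (assumed
    parameters, simple Euler integration) of a single gene whose protein
    output is switched on by a transcription factor and decays over time;
    real systems-biology models couple many such equations.

        # Toy gene-regulation model: production driven by a transcription
        # factor (Hill activation) minus first-order decay. All rate
        # constants are illustrative assumptions.

        def steady_protein(tf_level, hours=30.0, dt=0.01):
            k_max, K, n, decay = 1.0, 0.5, 2.0, 0.3
            protein = 0.0
            for _ in range(int(hours / dt)):
                production = k_max * tf_level**n / (K**n + tf_level**n)
                protein += dt * (production - decay * protein)
            return protein

        for tf in (0.1, 0.5, 2.0):
            print(f"TF level {tf:.1f} -> protein ~ {steady_protein(tf):.2f}")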

    Progress is limited by the difficulty of translating biological
    patterns into computer models. Network computer programs themselves
    are relatively simple, and the methods of portraying the results in
    ways that researchers can understand and interpret need improving. New
    institutions around the world are gathering interdisciplinary teams of
    biologists, mathematicians, and computer specialists to help promote
    systems biology approaches. But it is still in its early days.

    No one yet knows whether intensive interdisciplinary work and improved
    computational power will enable researchers to create a comprehensive,
    highly structured picture of how life works.
    _________________________________________________________________

How Far Can We Push Chemical Self-Assembly?

    Robert F. Service

    Most physical scientists nowadays focus on uncovering nature's
    mysteries; chemists build things. There is no synthetic astronomy or
    synthetic physics, at least for now. But chemists thrive on finding
    creative new ways to assemble molecules. For the last 100 years, they
    have done that mostly by making and breaking the strong covalent bonds
    that form when atoms share electrons. Using that trick, they have
    learned to combine as many as 1000 atoms into essentially any
    molecular configuration they please.

    Impressive as it is, this level of complexity pales in comparison to
    what nature flaunts all around us. Everything from cells to cedar
    trees is knit together using a myriad of weaker links between small
    molecules. These weak interactions, such as hydrogen bonds, van der
    Waals forces, and π-π interactions, govern the assembly of everything
    from DNA in its famous double helix to the bonding of H2O molecules in
    liquid water. More than just riding herd
    on molecules, such subtle forces make it possible for structures to
    assemble themselves into an ever more complex hierarchy. Lipids
    coalesce to form cell membranes. Cells organize to form tissues.
    Tissues combine to create organisms. Today, chemists can't approach
    the complexity of what nature makes look routine. Will they ever learn
    to make complex structures that self-assemble?

    Well, they've made a start. Over the past 3 decades, chemists have
    made key strides in learning the fundamental rules of noncovalent
    bonding. Among these rules: Like prefers like. We see this in
    hydrophobic and hydrophilic interactions that propel lipid molecules
    in water to corral together to form the two-layer membranes that serve
    as the coatings surrounding cells. They bunch their oily tails
    together to avoid any interaction with water and leave their more
    polar head groups facing out into the liquid. Another rule:
    Self-assembly is governed by energetically favorable reactions. Leave
    the right component molecules alone, and they will assemble themselves
    into complex ordered structures.
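
    Both rules can be seen in a toy Metropolis simulation: "oily"
    particles on a one-dimensional lattice that are attracted to like
    neighbors spontaneously cluster as the system drifts toward lower
    energy. The lattice, energies, and temperature below are all
    illustrative assumptions.

        import math
        import random

        # Toy self-assembly: particles (1) and vacancies (0) on a 1D
        # lattice. Each adjacent 1-1 pair lowers the energy by 1, so
        # clustering is energetically favorable; Metropolis swaps let the
        # system find ordered arrangements on its own.
        random.seed(0)
        N_SITES, T = 40, 0.3
        state = [1] * 10 + [0] * (N_SITES - 10)
        random.shuffle(state)

        def energy(s):
            return -sum(1 for i in range(len(s) - 1)
                        if s[i] == 1 and s[i + 1] == 1)

        for _ in range(20000):
            i, j = random.randrange(N_SITES), random.randrange(N_SITES)
            e_old = energy(state)
            state[i], state[j] = state[j], state[i]      # trial swap
            d_e = energy(state) - e_old
            if d_e > 0 and random.random() > math.exp(-d_e / T):
                state[i], state[j] = state[j], state[i]  # reject, swap back

        print("".join(map(str, state)), "energy:", energy(state))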

    Chemists have learned to take advantage of these and other rules to
    design self-assembling systems with a modest degree of complexity.
    Drug-carrying liposomes, made with lipid bilayers resembling those in
    cells, are used commercially to ferry drugs to cancerous tissues in
    patients. And self-assembled molecules called rotaxanes, which can act
    as molecular switches that oscillate back and forth between two stable
    states, hold promise as switches in future molecular-based computers.

    But the need for increased complexity is growing, driven by the
    miniaturization of computer circuitry and the rise of nanotechnology.
    As features on computer chips continue to shrink, the cost of
    manufacturing these ever-smaller components is skyrocketing. Right
    now, companies make them by whittling materials down to the desired
    size. At some point, however, it will become cheaper to design and
    build them chemically from the bottom up.

    Self-assembly is also the only practical approach for building a wide
    variety of nanostructures. Making sure the components assemble
    themselves correctly, however, is not an easy task. Because the forces
    at work are so small, self-assembling molecules can get trapped in
    undesirable conformations, making defects all but impossible to avoid.
    Any new system that relies on self-assembly must be able either to
    tolerate those defects or repair them. Again, biology offers an
    example in DNA. When enzymes copy DNA strands during cell division,
    they invariably make mistakes--occasionally inserting an A when they
    should have inserted a T, for example. Some of those mistakes get by,
    but most are caught by DNA-repair enzymes that scan the newly
    synthesized strands and correct copying errors.

    Strategies like that won't be easy for chemists to emulate. But if
    they want to make complex, ordered structures from the ground up,
    they'll have to get used to thinking a bit more like nature.
    _________________________________________________________________

What Are the Limits of Conventional Computing?

    Charles Seife

    At first glance, the ultimate limit of computation seems to be an
    engineering issue. How much energy can you put in a chip without
    melting it? How fast can you flip a bit in your silicon memory? How
    big can you make your computer and still fit it in a room? These
    questions don't seem terribly profound.

    In fact, computation is more abstract and fundamental than figuring
    out the best way to build a computer. This realization came in the
    mid-1930s, when Princeton mathematicians Alonzo Church and Alan Turing
    showed--roughly speaking--that any calculation involving bits and
    bytes can be done on an idealized computer known as a Turing machine.
    By showing that all classical computers are essentially alike, this
    discovery enabled scientists and mathematicians to ask fundamental
    questions about computation without getting bogged down in the
    minutiae of computer architecture.

    For example, theorists can now classify computational problems into
    broad categories. P problems are those, broadly speaking, that can be
    solved quickly, such as alphabetizing a list of names. NP problems are
    much tougher to solve but relatively easy to check once you've reached
    an answer. An example is the traveling salesman problem, finding the
    shortest possible route through a series of locations. The time all
    known algorithms need to find the answer grows explosively with the
    number of locations, so even relatively small instances might be out of
    reach of any classical computer.

    Mathematicians have shown that if you could come up with a quick and
    easy shortcut to solving any one of the hardest type of NP problems,
    you'd be able to crack them all. In effect, the NP problems would turn
    into P problems. But it's uncertain whether such a shortcut
    exists--whether P = NP. Scientists think not, but proving this is one
    of the great unanswered questions in mathematics.
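
    The asymmetry between finding and checking is easy to demonstrate. A
    sketch for the traveling salesman problem (the distances are made up):

        from itertools import permutations

        # Checking a proposed tour takes ~n steps, but the only sure way
        # we know to find the best tour examines ~n! orderings -- the gap
        # at the heart of P vs. NP. Distances are invented for the demo.
        dist = {(a, b): abs(a - b) + 1 for a in range(6) for b in range(6)}

        def tour_length(tour):              # fast check: linear in n
            return sum(dist[tour[i], tour[i + 1]]
                       for i in range(len(tour) - 1))

        def brute_force(n):                 # slow search: factorial in n
            return min(permutations(range(n)), key=tour_length)

        best = brute_force(6)               # 720 tours; n = 20 means ~2.4e18
        print("best tour:", best, "length:", tour_length(best))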

    In the 1940s, Bell Labs scientist Claude Shannon showed that bits are
    not just for computers; they are the fundamental units of describing
    the information that flows from one object to another. There are
    physical laws that govern how fast a bit can move from place to place,
    how much information can be transferred back and forth over a given
    communications channel, and how much energy it takes to erase a bit
    from memory. All classical information-processing machines are subject
    to these laws--and because information seems to be rattling back and
    forth in our brains, do the laws of information mean that our thoughts
    are reducible to bits and bytes? Are we merely computers? It's an
    unsettling thought.
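
    One of those physical laws, Landauer's principle, puts the minimum
    energy for erasing a single bit at kT ln 2. The arithmetic at room
    temperature:

        import math

        # Landauer's principle: erasing one bit dissipates at least
        # k_B * T * ln(2), however the computer is built.
        k_B = 1.380649e-23    # Boltzmann constant, J/K
        T = 300.0             # room temperature, K

        e_min = k_B * T * math.log(2)
        print(f"minimum energy to erase one bit at {T:.0f} K: {e_min:.2e} J")
        # ~2.9e-21 joules -- far below what real chips dissipate per bit.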

    But there is a realm beyond the classical computer: the quantum. The
    probabilistic nature of quantum theory allows atoms and other quantum
    objects to store information that's not restricted to only the binary
    0 or 1 of information theory, but can also be 0 and 1 at the same
    time. Physicists around the world are building rudimentary quantum
    computers that exploit this and other quantum effects to do things
    that are provably impossible for ordinary computers, such as finding a
    target record in a database with fewer queries than any classical
    search requires. But scientists are
    still trying to figure out what quantum-mechanical properties make
    quantum computers so powerful and to engineer quantum computers big
    enough to do something useful.
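
    The database feat mentioned above is Grover's algorithm. A minimal
    state-vector simulation for a four-entry "database" shows why it
    works: one quantum query plus one amplification step makes the marked
    entry certain, whereas a classical search needs two to three lookups
    on average.

        import numpy as np

        # Grover search over N = 4 entries, simulated with a state vector.
        N, marked = 4, 2
        state = np.full(N, 1 / np.sqrt(N))    # uniform superposition

        state[marked] *= -1                   # oracle: flip marked sign
        state = 2 * state.mean() - state      # diffusion: reflect about mean

        print("measurement probabilities:", np.round(state**2, 3))
        # -> [0. 0. 1. 0.]: the marked entry is found with certainty.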

    By learning the strange logic of the quantum world and using it to do
    computing, scientists are delving deep into the laws of the subatomic
    world. Perhaps something as seemingly mundane as the quest for
    computing power might lead to a newfound understanding of the quantum
    realm.
    _________________________________________________________________

Can We Selectively Shut Off Immune Responses?

    Jon Cohen

    In the past few decades, organ transplantation has gone from
    experimental to routine. In the United States alone, more than 20,000
    heart, liver, and kidney transplants are performed every year. But for
    transplant recipients, one prospect has remained unchanged: a lifetime
    of taking powerful drugs to suppress the immune system, a treatment
    that can have serious side effects. Researchers have long sought ways
    to induce the immune system to tolerate a transplant without blunting
    the body's entire defenses, but so far, they have had limited success.

    They face formidable challenges. Although immune tolerance can
    occur--in rare cases, transplant recipients who stop taking
    immunosuppressants have not rejected their foreign organs--researchers
    don't have a clear picture of what is happening at the molecular and
    cellular levels to allow this to happen. Tinkering with the immune
    system is also a bit like tinkering with a mechanical watch: Fiddle
    with one part, and you may disrupt the whole mechanism. And there is a
    big roadblock to testing drugs designed to induce tolerance: It is
    hard to know if they work unless immunosuppressant drugs are
    withdrawn, and that would risk rejection of the transplant. But if
    researchers can figure out how to train the immune system to tolerate
    transplants, the knowledge could have implications for the treatment
    of autoimmune diseases, which also result from unwanted immune
    attack--in these cases on some of the body's own tissues.

    A report in Science 60 years ago fired the starting gun in the race to
    induce transplant tolerance--a race that has turned into a marathon.
    Ray Owen of the University of Wisconsin, Madison, reported that
    fraternal twin cattle sometimes share a placenta and are born with
    each other's red blood cells, a state referred to as mixed chimerism.
    The cattle tolerated the foreign cells with no apparent problems.

    A few years later, Peter Medawar and his team at the University of
    Birmingham, U.K., showed that fraternal twin cattle with mixed
    chimerism readily accept skin grafts from each other. Medawar did not
    immediately appreciate the link to Owen's work, but when he saw the
    connection, he decided to inject fetal mice in utero with tissue from
    mice of a different strain. In a publication in Nature in 1953, Medawar
    and his colleagues showed that, after birth, some of these mice
    tolerated skin grafts from the donor strain. This influential
    experiment led
    many to devote their careers to transplantation and also raised hopes
    that the work would lead to cures for autoimmune diseases.

    Immunologists, many of them working with mice, have since spelled out
    several detailed mechanisms behind tolerance. The immune system can,
    for example, dispatch "regulatory" cells that suppress immune attacks
    against self. Or the system can force harmful immune cells to commit
    suicide or to go into a dysfunctional stupor called anergy.
    Researchers indeed now know fine details about the genes, receptors,
    and cell-to-cell communications that drive these processes.

    Yet it's one matter to unravel how the immune system works and another
    to figure out safe ways to manipulate it. Transplant researchers are
    pursuing three main strategies to induce tolerance. One builds on
    Medawar's experiments by trying to exploit chimerism. Researchers
    infuse the patient with the organ donor's bone marrow in hopes that
    the donor's immune cells will teach the host to tolerate the
    transplant; donor immune cells that come along with the transplanted
    organ also, some contend, can teach tolerance. A second strategy uses
    drugs to train T cells to become anergic or commit suicide when they
    see the foreign antigens on the transplanted tissue. The third
    approach turns up production of T regulatory cells, which prevent
    specific immune cells from copying themselves and can also suppress
    rejection by secreting biochemicals called cytokines that direct the
    immune orchestra to change its tune.

    All these strategies face a common problem: It is maddeningly
    difficult to judge whether the approach has failed or succeeded because
    there are no reliable "biomarkers" that indicate whether a person has
    become tolerant to a transplant. So the only way to assess tolerance
    is to stop drug treatment, which puts the patient at risk of rejecting
    the organ. Similarly, ethical concerns often require researchers to
    test drugs aimed at inducing tolerance in concert with
    immunosuppressive therapy. This, in turn, can undermine the drugs'
    effectiveness because they need a fully functioning immune system to
    do their job.

    If researchers can complete their 50-year quest to induce immune
    tolerance safely and selectively, the prospects for hundreds of
    thousands of transplant recipients would be greatly improved, and so,
    too, might the prospects for controlling autoimmune diseases.
    _________________________________________________________________

Do Deeper Principles Underlie Quantum Uncertainty and Nonlocality?

    Charles Seife

    "Quantum mechanics is very impressive," Albert Einstein wrote in 1926.
    "But an inner voice tells me that it is not yet the real thing." As
    quantum theory matured over the years, that voice has gotten
    quieter--but it has not been silenced. There is a relentless murmur of
    confusion underneath the chorus of praise for quantum theory.

    Quantum theory was born at the very end of the 19th century and soon
    became one of the pillars of modern physics. It describes, with
    incredible precision, the bizarre and counterintuitive behavior of the
    very small: atoms and electrons and other wee beasties of the
    submicroscopic world. But that success came with the price of
    discomfort. The equations of quantum mechanics work very well; they
    just don't seem to make sense.

    No matter how you look at the equations of quantum theory, they allow
    a tiny object to behave in ways that defy intuition. For example, such
    an object can be in "superposition": It can have two mutually
    exclusive properties at the same time. The mathematics of quantum
    theory says that an atom, for example, can be on the left side of a
    box and the right side of the box at the very same instant, as long as
    the atom is undisturbed and unobserved. But as soon as an observer
    opens the box and tries to spot where the atom is, the superposition
    collapses and the atom instantly "chooses" whether to be on the right
    or the left.
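
    In the standard formalism, that atom-in-a-box is a state vector whose
    squared amplitudes give the odds of each outcome (the Born rule). A
    minimal illustration:

        import numpy as np

        # An atom in equal superposition of "left" and "right".
        rng = np.random.default_rng()
        state = np.array([1.0, 1.0]) / np.sqrt(2)   # amplitudes (left, right)

        probs = state**2                            # Born rule: [0.5, 0.5]
        outcome = rng.choice(["left", "right"], p=probs)
        print("probabilities:", probs, "-> measured:", outcome)
        # After the measurement the superposition collapses: the state is
        # now entirely the observed alternative.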

    This idea is almost as unsettling today as it was 80 years ago, when
    Erwin Schrödinger ridiculed superposition by describing a half-living,
    half-dead cat. That is because quantum theory changes what the meaning
    of "is" is. In the classical world, an object has a solid reality:
    Even a cloud of gas is well described by hard little billiard
    ball-like pieces, each of which has a well-defined position and
    velocity. Quantum theory seems to undermine that solid reality.
    Indeed, the famous Uncertainty Principle, which arises directly from
    the mathematics of quantum theory, says that objects' positions and
    momenta are smeary and ill defined, and gaining knowledge about one
    implies losing knowledge about the other.

    The early quantum physicists dealt with this unreality by saying that
    the "is"--the fundamental objects handled by the equations of quantum
    theory--were not actually particles that had an extrinsic reality but
    "probability waves" that merely had the capability of becoming "real"
    when an observer makes a measurement. This so-called Copenhagen
    Interpretation makes sense, if you're willing to accept that reality
    is probability waves and not solid objects. Even so, it still doesn't
    sufficiently explain another weirdness of quantum theory: nonlocality.

    In 1935, Einstein came up with a scenario that still defies common
    sense. In his thought experiment, two particles fly away from each
    other and wind up at opposite ends of the galaxy. But the two
    particles happen to be "entangled"--linked in a quantum-mechanical
    sense--so that one particle instantly "feels" what happens to its
    twin. Measure one, and the other is instantly transformed by that
    measurement as well; it's as if the twins mystically communicate,
    instantly, over vast regions of space. This "nonlocality" is a
    mathematical consequence of quantum theory and has been measured in
    the lab. The spooky action apparently ignores distance and the flow of
    time; in theory, particles can be entangled after their entanglement
    has already been measured.

    On one level, the weirdness of quantum theory isn't a problem at all.
    The mathematical framework is sound and describes all these bizarre
    phenomena well. If we humans can't imagine a physical reality that
    corresponds to our equations, so what? That attitude has been called
    the "shut up and calculate" interpretation of quantum mechanics. But
    to others, our difficulties in wrapping our heads around quantum
    theory hint at greater truths yet to be understood.

    Some physicists in the second group are busy trying to design
    experiments that can get to the heart of the weirdness of quantum
    theory. They are slowly testing what causes quantum superpositions to
    "collapse"--research that may gain insight into the role of
    measurement in quantum theory as well as into why big objects behave
    so differently from small ones. Others are looking for ways to test
    various explanations for the weirdnesses of quantum theory, such as
    the "many worlds" interpretation, which explains superposition,
    entanglement, and other quantum phenomena by positing the existence of
    parallel universes. Through such efforts, scientists might hope to get
    beyond the discomfort that led Einstein to declare that "[God] does
    not play dice."
    _________________________________________________________________

Is an Effective HIV Vaccine Feasible?

    Jon Cohen

    In the 2 decades since researchers identified HIV as the cause of
    AIDS, more money has been spent on the search for a vaccine against
    the virus than on any vaccine effort in history. The U.S. National
    Institutes of Health alone invests nearly $500 million each year, and
    more than 50 different preparations have entered clinical trials. Yet
    an effective AIDS vaccine, which potentially could thwart millions of
    new HIV infections each year, remains a distant dream.

    Although AIDS researchers have turned the virus inside-out and
    carefully detailed how it destroys the immune system, they have yet to
    unravel which immune responses can fend off an infection. That means,
    as one AIDS vaccine researcher famously put it more than a decade ago,
    the field is "flying without a compass."

    Some skeptics contend that no vaccine will ever stop HIV. They argue
    that the virus replicates so quickly and makes so many mistakes during
    the process that vaccines can't possibly fend off all the types of HIV
    that exist. HIV also has developed sophisticated mechanisms to dodge
    immune attack, shrouding its surface protein in sugars to hide
    vulnerable sites from antibodies and producing proteins that thwart
    production of other immune warriors. And the skeptics point out that
    vaccine developers have had little success against pathogens like HIV
    that routinely outwit the immune system--the malaria parasite,
    hepatitis C virus, and the tuberculosis bacillus are prime examples.

    Yet AIDS vaccine researchers have solid reasons to believe they can
    succeed. Monkey experiments have shown that vaccines can protect
    animals from SIV, a simian relative of HIV. Several studies have
    identified people who repeatedly expose themselves to HIV but remain
    uninfected, suggesting that something is stopping the virus. A small
    percentage of people who do become infected never seem to suffer any
    harm, and others hold the virus at bay for a decade or more before
    showing damage to their immune systems. Scientists also have found
    that some rare antibodies do work powerfully against the virus in test
    tube experiments.

    At the start, researchers pinned their hopes on vaccines designed to
    trigger production of antibodies against HIV's surface protein. The
    approach seemed promising because HIV uses the surface protein to
    latch onto white blood cells and establish an infection. But vaccines
    that only contained HIV's surface protein looked lackluster in animal
    and test tube studies, and then proved worthless in large-scale
    clinical trials.

    Now, researchers are intensely investigating other approaches. When
    HIV manages to thwart antibodies and establish an infection, a second
    line of defense, cellular immunity, specifically targets and
    eliminates HIV-infected cells. Several vaccines now being tested aim
    to stimulate production of killer cells, the storm troopers
    of the cellular immune system. But cellular immunity involves other
    players--such as macrophages, the network of chemical messengers
    called cytokines, and so-called natural killer cells--that have
    received scant attention.

    The hunt for an antibody-based vaccine is also going through something
    of a renaissance, although it requires researchers to think
    backward. Vaccine researchers typically start with antigens--in this
    case, pieces of HIV--and then evaluate the antibodies they elicit. But
    now researchers have isolated more than a dozen antibodies from
    infected people that have blocked HIV infection in test tube
    experiments. The trick will be to figure out which specific antigens
    triggered their production.

    It could well be that a successful AIDS vaccine will need to stimulate
    both the production of antibodies and cellular immunity, a strategy
    many are attempting to exploit. Perhaps the key will be stimulating
    immunity at mucosal surfaces, where HIV typically enters. It's even
    possible that researchers will discover an immune response that no one
    knows about today. Or perhaps the answer lies in the interplay between
    the immune system and human genetic variability: Studies have
    highlighted genes that strongly influence who is most susceptible--and
    who is most resistant--to HIV infection and disease.

    Wherever the answer lies, the insights could help in the development
    of vaccines against other diseases that, like HIV, don't easily
    succumb to immune attack and that kill millions of people. Vaccine
    developers for these diseases will probably also have to look in
    unusual places for answers. The maps created by AIDS vaccine
    researchers currently exploring uncharted immunologic terrain could
    prove invaluable.

