[Paleopsych] Scientific American: Inconstant Constants

Premise Checker checker at panix.com
Sat Jul 2 15:24:34 UTC 2005


Inconstant Constants
http://www.sciam.com/print_version.cfm?articleID=0005BFE6-2965-128A-A96583414B7F0000
5.5.23

    Do the inner workings of nature change with time?
    By John D. Barrow and John K. Webb

    Some things never change. Physicists call them the constants of
    nature. Such quantities as the velocity of light, c, Newton's constant
    of gravitation, G, and the mass of the electron, m_e, are assumed to
    be the same at all places and times in the universe. They form the
    scaffolding around which the theories of physics are erected, and they
    define the fabric of our universe. Physics has progressed by making
    ever more accurate measurements of their values.

    And yet, remarkably, no one has ever successfully predicted or
    explained any of the constants. Physicists have no idea why they take
    the special numerical values that they do. In SI units, c is
    299,792,458; G is 6.673 × 10^-11; and m_e is 9.10938188 ×
    10^-31--numbers that follow no discernible pattern. The only thread
    running through the values is that if many of them were even slightly
    different, complex atomic structures such as living beings would not
    be possible. The desire to explain the constants has been one of the
    driving forces behind efforts to develop a complete unified
    description of nature, or "theory of everything." Physicists have
    hoped that such a theory would show that each of the constants of
    nature could have only one logically possible value. It would reveal
    an underlying order to the seeming arbitrariness of nature.

    In recent years, however, the status of the constants has grown more
    muddled, not less. Researchers have found that the best candidate for
    a theory of everything, the variant of string theory called M-theory,
    is self-consistent only if the universe has more than four dimensions
    of space and time--as many as seven more. One implication is that the
    constants we observe may not, in fact, be the truly fundamental ones.
    Those live in the full higher-dimensional space, and we see only their
    three-dimensional "shadows."

    Meanwhile physicists have also come to appreciate that the values of
    many of the constants may be the result of mere happenstance, acquired
    during random events and elementary particle processes early in the
    history of the universe. In fact, string theory allows for a vast
    number--10^500--of possible "worlds" with different self-consistent
    sets of laws and constants [see "The String Theory Landscape," by
    Raphael Bousso and Joseph Polchinski; Scientific American, September
    2004]. So far researchers have no idea why our combination was
    selected. Continued study may reduce the number of logically possible
    worlds to one, but we have to remain open to the unnerving possibility
    that our known universe is but one of many--a part of a
    multiverse--and that different parts of the multiverse exhibit
    different solutions to the theory, our observed laws of nature being
    merely one edition of many systems of local bylaws [see "Parallel
    Universes," by Max Tegmark; Scientific American, May 2003].

    No further explanation would then be possible for many of our
    numerical constants other than that they constitute a rare combination
    that permits consciousness to evolve. Our observable universe could be
    one of many isolated oases surrounded by an infinity of lifeless
    space--a surreal place where different forces of nature hold sway and
    particles such as electrons or structures such as carbon atoms and DNA
    molecules could be impossibilities. If you tried to venture into that
    outside world, you would cease to be.

    Thus, string theory gives with the right hand and takes with the left.
    It was devised in part to explain the seemingly arbitrary values of
    the physical constants, and the basic equations of the theory contain
    few arbitrary parameters. Yet so far string theory offers no
    explanation for the observed values of the constants.

    A Ruler You Can Trust

    Indeed, the word "constant" may be a misnomer. Our constants could
    vary both in time and in space. If the extra dimensions of space were
    to change in size, the "constants" in our three-dimensional world
    would change with them. And if we looked far enough out in space, we
    might begin to see regions where the "constants" have settled into
    different values. Ever since the 1930s, researchers have speculated
    that the constants may not be constant. String theory gives this idea
    a theoretical plausibility and makes it all the more important for
    observers to search for deviations from constancy.

    Such experiments are challenging. The first problem is that the
    laboratory apparatus itself may be sensitive to changes in the
    constants. The size of all atoms could be increasing, but if the ruler
    you are using to measure them is getting longer, too, you would never
    be able to tell. Experimenters routinely assume that their reference
    standards--rulers, masses, clocks--are fixed, but they cannot do so
    when testing the constants. They must focus their attention on
    constants that have no units--they are pure numbers--so that their
    values are the same irrespective of the units system. An example is
    the ratio of two masses, such as the proton mass to the electron mass.
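
    As a concrete illustration, here is a minimal Python sketch of such a
    unitless quantity; the mass values are standard CODATA figures of the
    period, quoted only to show that the ratio carries no units:

        # Dimensionless mass ratio: the same number in any unit system.
        m_p = 1.67262171e-27   # proton mass in kg (CODATA, approximate)
        m_e = 9.10938188e-31   # electron mass in kg (value quoted above)
        print(m_p / m_e)       # ~1836.15, a pure number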

    One ratio of particular interest combines the velocity of light, c,
    the electric charge on a single electron, e, Planck's constant, h, and
    the so-called vacuum permittivity, ε_0. This famous
    quantity, α = e^2/(2ε_0 hc), called the
    fine-structure constant, was first introduced in 1916 by Arnold
    Sommerfeld, a pioneer in applying the theory of quantum mechanics to
    electromagnetism. It quantifies the relativistic (c) and quantum (h)
    qualities of electromagnetic (e) interactions involving charged
    particles in empty space (ε_0). Measured to be equal
    to 1/137.03599976, or approximately 1/137, α has endowed the
    number 137 with a legendary status among physicists (it usually opens
    the combination locks on their briefcases).
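
    For readers who want to check the arithmetic, a short Python sketch
    evaluates the formula above from standard laboratory values of the
    four constants (approximate CODATA numbers, assumed here, not results
    from this article):

        e    = 1.60217653e-19    # electron charge, C (approximate)
        eps0 = 8.854187817e-12   # vacuum permittivity, F/m
        h    = 6.6260693e-34     # Planck's constant, J*s (approximate)
        c    = 299792458.0       # speed of light, m/s (exact)

        alpha = e**2 / (2 * eps0 * h * c)
        print(alpha, 1 / alpha)  # ~0.007297, ~137.036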

    If α had a different value, all sorts of vital features of
    the world around us would change. If the value were lower, the density
    of solid atomic matter would fall (in proportion to α^3),
    molecular bonds would break at lower temperatures (α^2),
    and the number of stable elements in the periodic table could increase
    (1/α). If α were too big, small atomic nuclei could not
    exist, because the electrical repulsion of their protons would
    overwhelm the strong nuclear force binding them together. A value as
    big as 0.1 would blow apart carbon.
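
    A rough sketch of how those proportionalities play out for a
    hypothetical 2 percent drop in α (this simply propagates the scalings
    quoted above; it is not a full physical calculation):

        alpha0 = 1 / 137.036
        alpha1 = 0.98 * alpha0         # hypothetical 2 percent decrease

        print((alpha1 / alpha0) ** 3)  # solid-matter density: ~0.94 of today
        print((alpha1 / alpha0) ** 2)  # bond-breaking temperature: ~0.96
        print(alpha0 / alpha1)         # stable-element count: ~1.02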

    The nuclear reactions in stars are especially sensitive to α.
    For fusion to occur, a star's gravity must produce temperatures high
    enough to force nuclei together despite their tendency to repel one
    another. If α exceeded 0.1, fusion would be impossible
    (unless other parameters, such as the electron-to-proton mass ratio,
    were adjusted to compensate). A shift of just 4 percent in α would
    alter the energy levels in the nucleus of carbon to such an extent
    that the production of this element by stars would shut down.

    Nuclear Proliferation

    The second experimental problem, less easily solved, is that measuring
    changes in the constants requires high-precision equipment that
    remains stable long enough to register any changes. Even atomic clocks
    can detect drifts in the fine-structure constant only over days or, at
    most, years. If α changed by more than four parts in 10^15
    over a three-year period, the best clocks would see it. None have.
    That may sound like an impressive confirmation of constancy, but three
    years is a cosmic eyeblink. Slow but substantial changes during the
    long history of the universe would have gone unnoticed.

    Fortunately, physicists have found other tests. During the 1970s,
    scientists from the French atomic energy commission noticed something
    peculiar about the isotopic composition of ore from a uranium mine at
    Oklo in Gabon, West Africa: it looked like the waste products of a
    nuclear reactor. About two billion years ago, Oklo must have been the
    site of a natural reactor [see "A Natural Fission Reactor," by George
    A. Cowan; Scientific American, July 1976].

    In 1976 Alexander Shlyakhter of the Nuclear Physics Institute in St.
    Petersburg, Russia, noticed that the ability of a natural reactor to
    function depends crucially on the precise energy of a particular state
    of the samarium nucleus that facilitates the capture of neutrons. And
    that energy depends sensitively on the value of α. So if
    the fine-structure constant had been slightly different, no chain
    reaction could have occurred. But one did occur, which implies that
    the constant has not changed by more than one part in 10^8 over the
    past two billion years. (Physicists continue to debate the exact
    quantitative results because of the inevitable uncertainties about the
    conditions inside the natural reactor.)

    In 1962 P. James E. Peebles and Robert Dicke of Princeton University
    first applied similar principles to meteorites: the abundance ratios
    arising from the radioactive decay of different isotopes in these
    ancient rocks depend on α. The most sensitive constraint
    involves the beta decay of rhenium into osmium. According to recent
    work by Keith Olive of the University of Minnesota, Maxim Pospelov of
    the University of Victoria in British Columbia and their colleagues,
    at the time the rocks formed, α was within two parts in 10^6 of its
    current value. This result is less precise than the Oklo data but goes
    back further in time, to the origin of the solar system 4.6 billion
    years ago.
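
    One convenient way to compare the clock, Oklo and meteorite bounds is
    to convert each into an average fractional drift rate per year. The
    sketch below crudely divides each quoted limit by its look-back time;
    real changes need not be linear, so this is illustration only:

        # (upper limit on fractional change in alpha, look-back time in years)
        bounds = {
            "atomic clocks": (4e-15, 3.0),
            "Oklo reactor":  (1e-8, 2.0e9),
            "meteorites":    (2e-6, 4.6e9),
        }
        for name, (limit, years) in bounds.items():
            print(name, limit / years, "per year")
        # clocks ~1.3e-15/yr, Oklo ~5e-18/yr, meteorites ~4.3e-16/yr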

    To probe possible changes over even longer time spans, researchers
    must look to the heavens. Light takes billions of years to reach our
    telescopes from distant astronomical sources. It carries a snapshot of
    the laws and constants of physics at the time when it started its
    journey or encountered material en route.

    Line Editing

    Astronomy first entered the constants story soon after the discovery
    of quasars in 1965. The idea was simple: quasars had just been
    identified as bright sources of light located at huge distances from
    Earth. Because the path of light from a quasar to us is
    so long, it inevitably intersects the gaseous outskirts of young
    galaxies. That gas absorbs the quasar light at particular frequencies,
    imprinting a bar code of narrow lines onto the quasar spectrum.

    Whenever gas absorbs light, electrons within the atoms jump from a low
    energy state to a higher one. These energy levels are determined by
    how tightly the atomic nucleus holds the electrons, which depends on
    the strength of the electromagnetic force between them--and therefore
    on the fine-structure constant. If the constant was different at the
    time when the light was absorbed or in the particular region of the
    universe where it happened, then the energy required to lift the
    electron would differ from that required today in laboratory
    experiments, and the wavelengths of the transitions seen in the
    spectra would differ. The way in which the wavelengths change depends
    critically on the orbital configuration of the electrons. For a given
    change in α, some wavelengths shrink, whereas others
    increase. The complex pattern of effects is hard to mimic by data
    calibration errors, which makes the test astonishingly powerful.
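
    In the research literature this dependence is captured by a
    sensitivity coefficient q for each transition: the wavenumber obeys
    ω = ω_0 + q[(α/α_0)^2 - 1], so lines with large q shift strongly
    while "anchor" lines with small q barely move. A Python sketch with
    roughly literature-sized q values, quoted here for illustration only:

        # Velocity shift of two lines for a fractional change of 1e-5 in alpha.
        d_alpha = 1e-5
        x = (1 + d_alpha) ** 2 - 1        # (alpha/alpha0)^2 - 1, ~2e-5

        # (rest wavenumber omega0, sensitivity q), both in cm^-1; rough values
        transitions = {
            "Mg II 2796 (anchor, small q)": (35760.8, 211.0),
            "Fe II 2382 (large q)":         (41968.1, 1505.0),
        }
        for name, (omega0, q) in transitions.items():
            shift_kms = q * x / omega0 * 299792.458   # as a velocity, km/s
            print(name, shift_kms)
        # Fe II moves several times more than Mg II: the telltale pattern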

    Before we began our work seven years ago, attempts to perform the
    measurement had suffered from two limitations. First, laboratory
    researchers had not measured the wavelengths of many of the relevant
    spectral lines with sufficient precision. Ironically, scientists used
    to know more about the spectra of quasars billions of light-years away
    than about the spectra of samples here on Earth. We needed
    high-precision laboratory measurements against which to compare the
    quasar spectra, so we persuaded experimenters to undertake them.
    Initial measurements were done by Anne Thorne and Juliet Pickering of
    Imperial College London, followed by groups led by Sveneric Johansson
    of Lund Observatory in Sweden and Ulf Griesmann and Rainer Kling of
    the National Institute of Standards and Technology in Maryland.

    The second problem was that previous observers had used so-called
    alkali-doublet absorption lines--pairs of absorption lines arising
    from the same gas, such as carbon or silicon. They compared the
    spacing between these lines in quasar spectra with laboratory
    measurements. This method, however, failed to take advantage of one
    particular phenomenon: a change in α shifts not just the
    spacing of atomic energy levels relative to the lowest-energy level,
    or ground state, but also the position of the ground state itself. In
    fact, this second effect is even stronger than the first.
    Consequently, the highest precision observers achieved was only about
    one part in 10^4.

    In 1999 one of us (Webb) and Victor V. Flambaum of the University of
    New South Wales in Australia came up with a method to take both
    effects into account. The result was a breakthrough: it meant 10 times
    higher sensitivity. Moreover, the method allows different species (for
    instance, magnesium and iron) to be compared, which allows additional
    cross-checks. Putting this idea into practice took complicated
    numerical calculations to establish exactly how the observed
    wavelengths depend on α in all the different atom types.
    Combined with modern telescopes and detectors, the new approach, known
    as the many-multiplet method, has enabled us to test the constancy of
    α with unprecedented precision.
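
    In spirit, the final step of such an analysis is a straight-line fit:
    each line's measured velocity shift is predicted to be proportional
    to its sensitivity, and the fitted slope is the fractional change in
    α. A toy version with synthetic numbers (the real analysis fits full
    absorption-profile models, so this shows only the idea):

        import random

        c = 299792458.0                   # m/s
        # (omega0, q) in cm^-1 for three illustrative transitions
        lines = [(35760.8, 211.0), (41968.1, 1505.0), (42658.2, 1254.0)]
        true_da = 6e-6                    # pretend alpha differed by 6 ppm

        x, v = [], []
        for omega0, q in lines:
            sens = 2 * c * q / omega0     # m/s of shift per unit d_alpha/alpha
            x.append(sens)
            v.append(sens * true_da + random.gauss(0, 5.0))  # 5 m/s noise

        # least-squares slope through the origin recovers d_alpha/alpha
        da = sum(a * b for a, b in zip(x, v)) / sum(a * a for a in x)
        print(da)                         # close to 6e-6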

    Changing Minds

    When embarking on this project, we anticipated establishing that the
    value of the fine-structure constant long ago was the same as it is
    today; our contribution would simply be higher precision. To our
    surprise, the first results, in 1999, showed small but statistically
    significant differences. Further data confirmed this finding. Based on
    a total of 128 quasar absorption lines, we found an average increase
    in α of close to six parts in a million over the past six
    billion to 12 billion years.

    Extraordinary claims require extraordinary evidence, so our immediate
    thoughts turned to potential problems with the data or the analysis
    methods. These uncertainties can be classified into two types:
    systematic and random. Random uncertainties are easier to understand;
    they are just that--random. They differ for each individual
    measurement but average out to be close to zero over a large sample.
    Systematic uncertainties, which do not average out, are harder to deal
    with. They are endemic in astronomy. Laboratory experimenters can
    alter their instrumental setup to minimize them, but astronomers
    cannot change the universe, and so they are forced to accept that all
    their methods of gathering data have an irremovable bias. For example,
    bright galaxies will tend to be overrepresented in any survey of
    galaxies, because they are easier to see. Identifying and neutralizing
    these biases is a constant challenge.

    The first one we looked for was a distortion of the wavelength scale
    against which the quasar spectral lines were measured. Such a
    distortion might conceivably be introduced, for example, during the
    processing of the quasar data from their raw form at the telescope
    into a calibrated spectrum. Although a simple linear stretching or
    compression of the wavelength scale could not precisely mimic a change
    in α, even an imprecise mimicry might be enough to explain
    our results. To test for problems of this kind, we substituted
    calibration data for the quasar data and analyzed them, pretending
    they were quasar data. This experiment ruled out simple distortion
    errors with high confidence.

    For more than two years, we put up one potential bias after another,
    only to rule it out after detailed investigation as too small an
    effect. So far we have identified just one potentially serious source
    of bias. It concerns the absorption lines produced by the element
    magnesium. Each of the three stable isotopes of magnesium absorbs
    light of a different wavelength, but the three wavelengths are very
    close to one another, and quasar spectroscopy generally sees the three
    lines blended as one. Based on laboratory measurements of the relative
    abundances of the three isotopes, researchers infer the contribution
    of each. If these abundances in the young universe differed
    substantially--as might have happened if the stars that spilled
    magnesium into their galaxies were, on average, heavier than their
    counterparts today--those differences could simulate a change in
    α.
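
    The concern reduces to simple arithmetic: the blended line sits at
    the abundance-weighted average of the three isotope positions, so
    changing the mix moves the blend even if α never changed. A sketch
    with placeholder isotope offsets (illustrative values, not
    measurements):

        # Blended Mg line position as an abundance-weighted mean.
        offsets = {"24Mg": 0.0, "25Mg": 410.0, "26Mg": 850.0}  # m/s, placeholders

        def centroid(abundances):
            return sum(abundances[iso] * offsets[iso] for iso in offsets)

        today = {"24Mg": 0.79, "25Mg": 0.10, "26Mg": 0.11}  # terrestrial mix
        early = {"24Mg": 0.90, "25Mg": 0.05, "26Mg": 0.05}  # hypothetical mix

        print(centroid(today) - centroid(early))  # nonzero: a spurious shift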

    But a study published this year indicates that the results cannot be
    so easily explained away. Yeshe Fenner and Brad K. Gibson of Swinburne
    University of Technology in Australia and Michael T. Murphy of the
    University of Cambridge found that matching the isotopic abundances to
    emulate a variation in α also results in the overproduction
    of nitrogen in the early universe--in direct conflict with
    observations. If so, we must confront the likelihood that α really
    has been changing.

    The scientific community quickly realized the immense potential
    significance of our results. Quasar spectroscopists around the world
    were hot on the trail and rapidly produced their own measurements. In
    2003 teams led by Sergei Levshakov of the Ioffe Physico-Technical
    Institute in St. Petersburg, Russia, and Ralf Quast of the University
    of Hamburg in Germany investigated three new quasar systems. Last year
    Hum Chand and Raghunathan Srianand of the Inter-University Center for
    Astronomy and Astrophysics in India, Patrick Petitjean of the
    Institute of Astrophysics and Bastien Aracil of LERMA in Paris
    analyzed 23 more. None of these groups saw a change in α.
    Chand argued that any change must be less than one part in 10^6 over
    the past six billion to 10 billion years.

    How could a fairly similar analysis, just using different data,
    produce such a radical discrepancy? As yet the answer is unknown. The
    data from these groups are of excellent quality, but their samples are
    substantially smaller than ours and do not go as far back in time. The
    Chand analysis did not fully assess all the experimental and
    systematic errors--and, being based on a simplified version of the
    many-multiplet method, might have introduced new ones of its own.

    One prominent astrophysicist, John Bahcall of Princeton, has
    criticized the many-multiplet method itself, but the problems he has
    identified fall into the category of random uncertainties, which
    should wash out in a large sample. He and his colleagues, as well as a
    team led by Jeffrey Newman of Lawrence Berkeley National Laboratory,
    have looked at emission lines rather than absorption lines. So far
    this approach is much less precise, but in the future it may yield
    useful constraints.

    Reforming the Laws

    If our findings prove to be right, the consequences are enormous,
    though only partially explored. Until quite recently, all attempts to
    evaluate what happens to the universe if the fine-structure constant
    changes were unsatisfactory. They amounted to nothing more than
    assuming that α became a variable in the same formulas that
    had been derived assuming it is a constant. This is a dubious
    practice. If α varies, then its effects must conserve energy
    and momentum, and they must influence the gravitational field in the
    universe. In 1982 Jacob D. Bekenstein of the Hebrew University of
    Jerusalem was the first to generalize the laws of electromagnetism to
    handle inconstant constants rigorously. The theory elevates
    α from a mere number to a so-called scalar field, a dynamic
    ingredient of nature. His theory did not include gravity, however.
    Four years ago one of us (Barrow), with Håvard Sandvik and João
    Magueijo of Imperial College London, extended it to do so.

    This theory makes appealingly simple predictions. Variations in
    α of a few parts per million should have a completely
    negligible effect on the expansion of the universe. That is because
    electromagnetism is much weaker than gravity on cosmic scales. But
    although changes in the fine-structure constant do not affect the
    expansion of the universe significantly, the expansion affects
    α. Changes to α are driven by imbalances between
    the electric field energy and magnetic field energy. During the first
    tens of thousands of years of cosmic history, radiation dominated over
    charged particles and kept the electric and magnetic fields in
    balance. As the universe expanded, radiation thinned out, and matter
    became the dominant constituent of the cosmos. The electric and
    magnetic energies became unequal, and α started to increase
    very slowly, growing as the logarithm of time. About six billion years
    ago dark energy took over and accelerated the expansion, making it
    difficult for all physical influences to propagate through space. So
    α became nearly constant again.
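
    A toy rendering of that history: hold α fixed in the radiation era,
    let it grow as the logarithm of time through the matter era, then
    freeze it once dark energy dominates. The growth constant k below is
    a free parameter picked to give parts-per-million changes; it is not
    the fitted value of the actual theory:

        import math

        def alpha_ratio(t_gyr, k=2e-6, t_m=5e-5, t_de=8.0):
            """Toy alpha(t)/alpha(now): flat, logarithmic growth, flat."""
            t = min(max(t_gyr, t_m), t_de)  # clamp outside the matter era
            return 1 + k * math.log(t / t_de)

        for t in (1e-5, 0.001, 1.0, 5.0, 8.0, 13.7):
            print(t, alpha_ratio(t))        # smaller in the past, frozen lately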

    This predicted pattern is consistent with our observations. The quasar
    spectral lines represent the matter-dominated period of cosmic
    history, when α was increasing. The laboratory and Oklo
    results fall in the dark-energy-dominated period, during which α has
    been constant. The continued study of the effect of changing
    α on radioactive elements in meteorites is particularly
    interesting, because it probes the transition between these two
    periods.

    Alpha Is Just the Beginning

    Any theory worthy of consideration does not merely reproduce
    observations; it must make novel predictions. The above theory
    suggests that varying the fine-structure constant makes objects fall
    differently. Galileo predicted that bodies in a vacuum fall at the
    same rate no matter what they are made of--an idea known as the weak
    equivalence principle, famously demonstrated when Apollo 15 astronaut
    David Scott dropped a feather and a hammer and saw them hit the lunar
    dirt at the same time. But if α varies, that principle no
    longer holds exactly. The variations generate a force on all charged
    particles. The more protons an atom has in its nucleus, the more
    strongly it will feel this force. If our quasar observations are
    correct, then the accelerations of different materials differ by about
    one part in 10^14--too small to see in the laboratory by a factor of
    about 100 but large enough to show up in planned missions such as STEP
    (space-based test of the equivalence principle).

    There is a last twist to the story. Previous studies of α
    neglected to include one vital consideration: the lumpiness of the
    universe. Like all galaxies, our Milky Way is about a million times
    denser than the cosmic average, so it is not expanding along with the
    universe. In 2003 Barrow and David F. Mota of Cambridge calculated
    that α may behave differently within the galaxy than inside
    emptier regions of space. Once a young galaxy condenses and relaxes
    into gravitational equilibrium, α nearly stops changing
    inside it but keeps on changing outside. Thus, the terrestrial
    experiments that probe the constancy of α suffer from a
    selection bias. We need to study this effect more to see how it would
    affect the tests of the weak equivalence principle. No spatial
    variations of α have yet been seen. Based on the uniformity
    of the cosmic microwave background radiation, Barrow recently showed
    that α does not vary by more than one part in 10^8 between
    regions separated by 10 degrees on the sky.

    So where does this flurry of activity leave science as far as
    α is concerned? We await new data and new analyses to
    confirm or disprove that α varies at the level claimed. Researchers
    focus on α, rather than on the other constants of nature, simply
    because its effects are more readily seen. If α is
    susceptible to change, however, other constants should vary as well,
    making the inner workings of nature more fickle than scientists ever
    suspected.

    The constants are a tantalizing mystery. Every equation of physics is
    filled with them, and they seem so prosaic that people tend to forget
    how unaccountable their values are. Their origin is bound up with some
    of the grandest questions of modern science, from the unification of
    physics to the expansion of the universe. They may be the superficial
    shadow of a structure larger and more complex than the
    three-dimensional universe we witness around us. Determining whether
    constants are truly constant is only the first step on a path that
    leads to a deeper and wider appreciation of that ultimate vista.

