[Paleopsych] Notices of the AMS: Some of What Mathematicians Do by Martin H. Krieger

Premise Checker checker at panix.com
Wed Oct 20 19:25:07 UTC 2004


Some of What Mathematicians Do by Martin H. Krieger
NOTICES OF THE AMS (American Mathematical Society) VOLUME 51, NUMBER 10
http://www.ams.org/notices/200410/comm-krieger.pdf (converted to txt by
me)

Martin H. Krieger is professor of planning at the University of Southern
California. His email address is krieger at usc.edu.

Whether it be at a party or at a tavern or while being examined by a
physician, on announcing that you are a mathematician, you are likely to
be greeted with comments about your companion's failure in high school
math, or a request for a brief account of the proof of Fermat's Last
Theorem, or perhaps an offer of a counterexample to the Four Color
Theorem. Your parents, your friends and relatives, airplane seatmates,
or your dean or provost are not likely to be mathematicians, and they
too would like to know what you do, preferably in bite-sized pieces.

Might we provide an everyday description that has sufficient technical
detail so that a mathematician would recognize the work as real research
mathematics? I suggest that if we think of mathematical work as showing
that what might seem arbitrary is actually necessary, as analyzing
everyday notions, as calculation, and as analogizing--using rich
examples of mathematical work itself, we might be able to say a bit more
about some of what mathematicians do. None of these descriptions are
easy, but I think they connect better with the work of other people, so
that they might see our work and their own as having some shared
features.

Conventions

Mathematicians make certain notions conventional. What might seem
arbitrary is shown to be in effect necessary, at least within a wide
enough range of situations. For example, means and variances were once
taken merely as ways of "combining observations", to use a term of art
of two hundred years ago. There were other ways, including medians and
average absolute deviations (Σ|x_i - x̄|/N). But through the central limit
theorem, for example, the variance became entrenched as a good measure
of the width of a distribution for various different kinds of more or
less identically distributed independent random variables. Moreover, it
was easy to depict such statistics in a Euclidean space of observations,
the various formulas being Pythagorean theorems with Euclidean
distances. And if one used a large electromechanical calculator, it was
not hard to set up the calculation so that one could calculate a sum of
the squares of x_i and y_i and a sum of x_i y_i. In the law of the iterated
logarithm, Khinchin provided an estimate of fluctuations that would not
be readily accounted for by gaussian behavior, so even exceptional
behavior fit under this regimen.
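
To see the √N scaling at work, here is a small numerical sketch (in
Python, with uniform summands chosen merely for convenience; any
well-behaved distribution would do): sums of N such variables, once
centered and divided by √N, settle down to a fixed width.

    # Sketch: centered sums divided by sqrt(N) have a width that no
    # longer depends on N -- the width is the standard deviation of the
    # summands (about 0.289 for a uniform variable on [0, 1]).
    import random
    import statistics

    def normed_sums(n, trials=2000):
        out = []
        for _ in range(trials):
            s = sum(random.random() for _ in range(n))
            out.append((s - 0.5 * n) / n ** 0.5)
        return out

    for n in (10, 100, 1000):
        print(n, round(statistics.stdev(normed_sums(n)), 3))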

Variances turned out to be good measures of the kinds of noise and
dissipation physicists encountered, and Einstein's work on fluctuations
(1905, 1917) entrenched variances as the measure of choice. It also
turned out that variances were good measures of the risk involved in
financial markets, and the calculus of Lévy and Itô (where, in effect,
dx is replaced by √dx) became the bread and butter of finance
professors.
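
The "√dx" can be seen in a tiny simulation (a sketch, with a plain
Brownian path standing in for any such process): an increment over a
time step dt is of size √dt, so the sum of the squared increments, the
quadratic variation, reproduces the elapsed time.

    # Sketch: increments of size ~ sqrt(dt); their squares sum to
    # (approximately) the elapsed time T, which is the content of
    # replacing dx by sqrt(dx) in the Ito calculus.
    import random

    def quadratic_variation(T=1.0, steps=10000):
        dt = T / steps
        total = 0.0
        for _ in range(steps):
            dW = random.gauss(0.0, dt ** 0.5)
            total += dW * dW
        return total

    print(quadratic_variation())   # close to T = 1.0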

As for exceptions to means and variances, Lévy showed that the crucial
fact was the asymptotic norming constant, the √N that appears in the
central limit theorem: that is, N^(1/α), here α = 2. For α need not be 2
but could be other numbers for other distributions ("distributions
without variance", that is, with infinite variance), which still scaled
asymptotically, such as the world of fractals. However, if the variance
is finite, then the only game in town is the gaussian. The deep idea
turns out to be asymptotic approximation and scaling, that N^(1/α). And
this is seen in modern results related to random matrices and prime
number distributions, where the norming constant can be N^(1/6), for
example.
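
A crude numerical illustration of the differing norming constants (a
sketch; the Cauchy distribution stands in for the "distributions
without variance"): the spread of a sum of N gaussian draws grows like
N^(1/2), while the spread of a sum of N Cauchy draws grows like N, that
is, α = 1.

    # Sketch: how the spread of a sum of n draws grows with n.  The
    # interquartile range measures the spread, since the Cauchy
    # distribution has no variance at all.
    import math
    import random
    import statistics

    def spread_of_sums(draw, n, trials=1000):
        sums = [sum(draw() for _ in range(n)) for _ in range(trials)]
        q = statistics.quantiles(sums, n=4)
        return q[2] - q[0]

    gaussian = lambda: random.gauss(0.0, 1.0)
    cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))

    for n in (10, 100, 1000):
        print(n,
              round(spread_of_sums(gaussian, n), 1),  # grows like sqrt(n)
              round(spread_of_sums(cauchy, n), 1))    # grows like n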

What is made conventional here is the gaussian, characterized by its
mean and variance, and its being the asymptotic limit of sums of nice
random variables. And that is made clear by the description of its
exceptions. Although means and variances might well be arbitrary, they
are demonstrably the right statistics ("necessary") for a wide range of
cases.

Nowadays, statisticians are realizing that for actual data sets, often
infected by wild and outlying data, one needs statistical methods that
are "robust" and "resistant", not a strong point of means and variances.
For a wide range of new cases, means and variances will no longer be
conventions, and presumably new statistics will be shown to be
"necessary" and will become the reigning conventions.
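
A toy example of such resistance (a sketch with made-up numbers): a
single wild observation drags the mean and the standard deviation far
from the bulk of the data, while the median barely notices.

    # Sketch: eight well-behaved observations, then one outlier.
    import statistics

    clean = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
    dirty = clean + [1000.0]

    for data in (clean, dirty):
        print(round(statistics.mean(data), 1),
              round(statistics.median(data), 2),
              round(statistics.stdev(data), 1))
    # The mean and stdev jump from about 10 and 0.2 to about 120 and
    # 330; the median stays at 10.0.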

Mathematicians affirm that these conventions are not arbitrary. They are
well grounded in mathematical practice and theory.

Analyzing Everyday Notions

Mathematicians formally analyze everyday notions. Topology developed as
a way of understanding nearbyness, connectivity, and networks. It turned
out that the key idea was continuity of mappings and how that continuity
was affected by other transformations. For continuity preserved
nearbyness, connectivity, and networks. Of course, this demanded a
number of conceptual and mathematical discoveries. One great discovery
was the subtleties of continuity, uniform vs. pointwise, for example. A
second discovery was the fact that one might represent continuity and
neighborhoods in terms of mappings: a mapping is continuous precisely
when the inverse image of every open set is itself open. A third
discovery was that networks could be
characterized in terms of how they decomposed into simpler networks and
that characterization would be preserved under continuous mappings.
Moreover, a space might well be approximated by a skeletal framework,
and a study of that framework would tell us about the space. A fourth
discovery was that that decomposition sequence had a natural algebraic
analog in commutative algebra. And a fifth discovery was that the
algebraic decomposition had a natural analog with derivatives and second
derivatives (Stokes's and Green's theorems and Gibbs's vector calculus),
again the world of continuity.
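
In standard notation, some of these discoveries can be summarized as
follows (a schematic shorthand):

    % Continuity characterized by mappings of open sets:
    f : X \to Y \text{ is continuous} \iff
        f^{-1}(U) \text{ is open in } X \text{ for every open } U \subseteq Y.
    % Decompositions come with boundary operators satisfying "the
    % boundary of a boundary is zero", with an analytic analog:
    \partial \circ \partial = 0, \qquad d \circ d = 0,
    % and the two are tied together by Stokes's theorem:
    \int_{M} d\omega = \int_{\partial M} \omega.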

As a consequence of this analysis, it was realized that there are many
different kinds of nearbyness and many different topologies for a space,
yet they might share important features. Functions came to be understood
as mappings, in terms of what they did. And the transcendental realm
turned out to be deeply involved with the algebraic realm. That analysis
of everyday notions led to powerful technologies for analyzing
connectivity and networks, techniques vital to current society. Those
technologies are grounded in the formal mathematical analysis.

Calculation

Perhaps "proofs should be driven not by calculation but solely by
ideas", as Hilbert averred in what he called Riemann's Principle. But
some of the time, if not often, mathematicians have to
calculate--doggedly and lengthily--in order to get interesting results.
In some future time, knowing the solution, other mathematicians may well
be able to provide a one-line proof driven solely by ideas, plus a great
deal of mathematical superstructure built up in the intervening period.
Or it may be that lengthy proof and calculation are unavoidable, and
delicate arguments involving hairy technology are the only way to go.
The mathematician's achievement is, first of all, to actually follow
through on that long and complex calculation and come to a useful
conclusion, and, second, to present that calculation so that it is
mildly illuminating. As we shall see, such a presentation involves
matters of structure, organizing the whole; strategy, being able to tell
a story about how it all holds together; and tactics, being able to do
what needs to be done to get on with the next main step of the proof.

The first proof, by Dyson and Lenard (1967-1968), of the stability of
matter--that bulk matter, held together by the electrical forces of
electrons and nuclei, won't collapse (and then explode)--is considered
one of these long and elaborate calculations. What one has to prove is
that the binding energy of bulk matter per nucleus is bounded from below
by a negative constant, -E*. The proof begins with an idea: an insight
by Onsager (1939) about how to incorporate the screening of positively
charged nuclei by negatively charged electrons. But the actual
calculation would seem to involve a number of preliminary theorems and a
goodly number of lemmas, all of which might seem a bit distant from the
main problem. Actually, many of the preliminary theorems motivate the
proof and indicate what is needed if a proof is to go through. And the
lemmas might be seen as hanging from a tree of theorems, or as troops
lined up to do particular work. As in many such calculations, the result
almost miraculously appears at the end. And in this case the
proportionality constant is about 10^14 larger in absolute value than it
need be.
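
Written out schematically (the precise hypotheses are in the papers),
the statement to be proved is a lower bound on the ground-state energy
that is linear in the number of particles:

    % Stability of matter: for N nuclei together with their electrons,
    E(N) \ge -E^{*} N \quad \text{for all } N,
    % with E^{*} a constant independent of N.  Dyson and Lenard obtained
    % such an E^{*}, though about 10^{14} times larger in absolute value
    % than it need be.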

A few years later, Lieb and Thirring (1975) were able to figure out how
to efficiently use the crucial physics of the problem (Onsager's
screening, and also that the electrons are fermions and are represented
by antisymmetric wave functions). As a consequence, the proof was now
about ideas, involved comparatively little calculation, and could be
readily seen in outline, and the proportionality constant was about 10
rather than 10^14. Their crucial move was to employ the Thomas-Fermi
model of an atom: the many electrons in an atom exist in a field due to
their own charges (as well as that of the nucleus), and hence one seeks
a self-consistent field.

Dyson and Lenard had all these ideas except for Thomas-Fermi. But in
their pioneering proof, getting to the endpoint was avowedly more
important than efficiency or controlling the size of the proportionality
constant, -E*. Theirs was a first proof of a fundamental fact of our
world. By the way, in retrospect, the Dyson-Lenard proof is rather less
long than it once appeared, its various manipulations along the way
rather more rich with meaning.

Over the next decades a variety of rigorous proofs were provided of
various fundamental facts about our world, many of them lengthy and
complex and involving much calculation.

(1) Thermodynamics. One would like to be able to estimate the binding
energy of bulk matter, the energy required to break it up into isolated
atoms, as being proportional to the number of atoms. Such an estimate
justifies thermodynamics, with its separation of intensive variables
(such as temperature) and extensive variables (such as volume or number
of particles). In a remarkable and lengthy proof, Lebowitz and Lieb
(1972) provided a calculation of the asymptotic form of the binding
energy of bulk matter, E ~ -AN, where N is the number of atoms--just the
required form. Along the way, they employed the Dyson-Lenard result.
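
In a formula (a schematic restatement), the required asymptotic form
says that the energy per atom approaches a constant, so that energy is
an extensive quantity:

    % Thermodynamic limit: doubling the amount of matter doubles the
    % binding energy,
    \lim_{N \to \infty} \frac{E(N)}{N} = -A, \qquad A > 0.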

In all of these calculations, one technical problem is to figure out how
to break up space into balls or boxes, fitting the atoms into those
containers ("balls into boxes"). For example, Lebowitz and Lieb develop
a Swiss-cheese decomposition: smaller balls fit into the interstices
between larger balls.

(2) A gas of atoms. One would like to prove that at a suitable
temperature and pressure, atoms form, and one has a gas of such atoms.
Charles Fefferman (1983-1986) provides the proof with all of its
"gruesome details", as he refers to the latter endeavor. First, he
employs a technology he developed for solving partial differential
equations--what he called "the uncertainty principle", the idea that the
phase space of x and d/dx might be divided into suitably shaped boxes on
which the differential equation is trivial--and then fills balls of phase
space with these boxes, fitting "boxes into balls". Along the way, he
redoes the Lieb-Thirring proof.

What is notable is his technical definition of an atom and, later, of a
gas of atoms, a mathematically precise way of describing a physical
state, one that would allow him to make mathematical progress on the
problem. What is remarkable, and this is true for much of Fefferman's
work, is his capacity to push through a lengthy calculation.

In order to complete the proof of "the atomic nature of matter" (that a
gas of atoms forms), Fefferman then needs an even better estimate for
the proportionality constant for the stability of matter than was
provided by Lieb and Thirring, and with de la Llave and Trotter he
provides a lengthy proof and an exact numerical calculation for E*.
(Lieb and his followers have provided another route to such better
constants.) So far, it should be noted, the calculated E* is still about
two times too big for Fefferman's purposes, given what we expect.

(3) An isolated atom. Finally, one would like to estimate the ground
state energy of a large isolated atom. The hydrogen atom's proverbial
13.6 electron-volts is the only calculation one might make in closed
form (one of the first calculations in a quantum mechanics course). For
larger atoms one must use approximations in which the errors are not in
general rigorously known. In a series of calculations, some rigorous,
some merely physical, by Lieb and Simon, Scott, Dirac, and Schwinger, a
good idea of the asymptotic formula for the ground state energy in terms
of Z, the atomic charge, is given as a series in Z^(1/3): Z^(7/3),
Z^(6/3), Z^(5/3). What Fefferman and Seco (1990-1996) provide in
something like 800 pages of proof is a rigorous derivation of this
formula with a rigorous estimate of its error, O(Z^(5/3 - 1/a)). Whole
new technologies for
partial differential equations are developed along the way, and even the
paper that brings these all together is almost two hundred pages in
length. Their achievement is again the ability to divide up the problem
into tractable parts, to orchestrate the parts so that they work
together, and to be able to tell a story of the proof (in this case, in
fourteen pages). There have been subsequent simplifications for parts of
the Fefferman-Seco derivation, but much of the calculation remains
lengthy and complicated. And Córdoba, Fefferman, and Seco have found the
next term in the asymptotic expansion.
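
Schematically, with the constants left unspecified here, the asymptotic
formula has the form

    % Ground-state energy of an atom of charge Z, as a series in Z^{1/3}:
    E(Z) = c_{1} Z^{7/3} + c_{2} Z^{6/3} + c_{3} Z^{5/3}
           + O\bigl(Z^{5/3 - 1/a}\bigr),
    % the leading (Thomas-Fermi) term being negative; the Z^{2} term is
    % the Scott correction and the Z^{5/3} term the Dirac-Schwinger
    % correction, with Fefferman and Seco supplying the rigorous error
    % term.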

Lengthy calculation demands enormous technical skill, courage, and
insight and usually demands herculean inventions along the way. But
sometimes it is the only way to make progress on a problem. I have
chosen examples in which the lengthy calculations also lead to analyses
of everyday notions, such as a gas of atoms.

Analogy

Some time ago, Pólya showed that analogy plays a vital role in
mathematical work. Sometimes those analogies are provably true, such as
the analogy between ideals and varieties: polynomials and their
properties, considered as algebraic objects, and the graphs of those
polynomials and their properties, considered as geometric objects. At
other times, the analogies are not provable but provide for ongoing
research programs for hundreds of years. Here I want to describe a
syzygy, an analogy of analogies, between mathematical work and work in
mathematical physics. What the physicists find, the mathematicians would
expect, although the mathematicians could never have predicted such an
analogy in the physical realm without the physicists' work.

For the mathematicians, I am thinking of the
Riemann-Dedekind/Weber-Weil-Langlands analogy of analysis, algebra, and
arithmetic. I will call it the Dedekind-Weil analogy, for short.
Dedekind and Weber tried to derive Riemann's results concerning the
transcendental realm (that is, referring to the realm of the
continuous)--think here of Riemann surfaces and the Riemann-Roch
theorem--using rigorous algebraic methods with no intuitions about
continuity. Again, could there be a useful analogy between curves or
surfaces and algebra? They were guided by what was known algebraically
about numbers (number theory); in fact, they were able to translate
those concepts and results to the realm of polynomials, and so were able
to algebraicize Riemann's transcendental point of view. Subsequently,
Hilbert and Weil and others extended the analogy.

André Weil describes the analogy in a particularly poignant way in a
long letter he wrote from prison to his sister, Simone, in 1940. It is a
remarkable document, combining a rich mixture of mathematics, a notional
history of the analogy, reflections on how Weil himself does
mathematics, and analogies of the interchange among the moments of the
analogy to incest and war. I urge the reader to get hold of it (either
in the original French in the first volume of Weil's Collected Papers,
or in English translation in my Doing Mathematics).

Weil refers to three columns, in analogy with the Rosetta Stone's three
languages and their arrangement, and the task is to "learn to read
Riemannian". Given an ability to read one column, can you find its
translation in the other columns? In the first column are Riemann's
transcendental results and, more generally, work in analysis and
geometry. In the second column is algebra, say polynomials with
coefficients in the complex numbers or in a finite field. And in the
third column is arithmetic or number theory and combinatorial
properties. So, for example: (Column 3) Arithmetically, the zeta
function packages the prime numbers. (Column 2) Algebraically, its Mellin
transform (a Fourier-like transform) is the theta function, originally
found by Fourier in solving the heat equation. Theta has wonderful
algebraic properties, such as automorphy (transformations of the
function, that is, of its argument, can be expressed in terms of the
function itself) and a functional equation that defines it. And (Column 1),
analytically, the spectrum of the zeta function (its zeros) is rich with
information about the prime numbers. A simple example of the threefold
analogy is found in the sine function: its series expansion packages the
factorials of the odd numbers; sin Mx is expressible in terms of the
trigonometric functions themselves (say, sin x and cos x); and the
periodicity of the sine function (its spectrum) more or less defines it.
Weil points out that the analogy continues to be productive; his own
later proof of the Riemann hypothesis in the algebraic column (for
curves over finite fields) is a case in point.
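
In formulas, the three columns for this example look as follows
(standard statements, compressed):

    % Column 3 (arithmetic): zeta packages the primes,
    \zeta(s) = \sum_{n \ge 1} n^{-s}
             = \prod_{p \text{ prime}} \bigl(1 - p^{-s}\bigr)^{-1}.
    % Column 2 (algebra, automorphy): the theta function and its
    % transformation law,
    \theta(t) = \sum_{n \in \mathbb{Z}} e^{-\pi n^{2} t}, \qquad
    \theta(1/t) = \sqrt{t}\,\theta(t).
    % The Mellin transform links the two,
    \pi^{-s/2}\,\Gamma(s/2)\,\zeta(s)
        = \int_{0}^{\infty} \frac{\theta(t) - 1}{2}\, t^{s/2 - 1}\, dt,
    % and theta's transformation law yields zeta's functional equation:
    % the completed zeta is invariant under s -> 1 - s.  Column 1 then
    % studies the zeros.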

In the twentieth century, mathematicians discovered that attaching group
representations (or systems of matrices) to objects would often lead to
progress in understanding those objects. Langlands's very great
contribution (1960s ff) was to suggest, following Emil Artin, that
attaching a group representation to the algebraic or automorphy column
would turn out to be very productive for understanding the arithmetic
column. The idea is to extend the analogy of theta functions to zeta
functions into a much more complicated realm. Moreover, what might be
impossibly difficult to prove from the point of view of one column is
readily built into another, much as theta's automorphy and functional
equation lead to zeta's functional equation.

While the mathematicians worked at their analogy, physicists were
solving a simple classical model of a ferromagnet using statistical
mechanics: the Ising model in two dimensions, up-down spins arranged on
a, say, rectangular lattice. The spins' interaction is local and
simplified. The first exact solution was provided by Onsager in 1944,
using a combination of Clifford or quaternion algebra and elliptic
functions. Over the subsequent sixty years, physicists have provided
many different solutions of the Ising model. (One solution refers to
itself as the "399th solution".) Of course, they all get the same result
for the partition function (in effect, the zeta function for this
problem). When we examine the solutions, we discover that we might group
the solutions into those that are arithmetic and combinatorial, those
that are algebraic and automorphic, and those that are analytic or
transcendental concerned with the zeros of the partition function.
Moreover, from the initial solutions of the Ising model by Kramers and
Wannier and by Montroll (1941), matrices played a crucial role in many
of the solutions. They were in fact group representations, although they
were not taken as such. They were taken to be matrices that conveniently did
the combinatorics, and it was the algebraic properties of those matrices
that allowed for the Onsager solution. No one worried much about what
those matrices were a group representation of, although Onsager surely
had many insights. The trace of those transfer matrices was the
partition function of interest. Moreover, once again, there were
functional equations that allowed for the solution for the partition
function, and there were the scaling symmetries and automorphies
characteristic of theta or elliptic functions. The latter were
eventually canonized in the renormalization group techniques of Wilson
(1960s, 1970s).
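
As a toy version of the transfer-matrix idea (a one-dimensional sketch,
far simpler than the two-dimensional model discussed above): the
partition function is the trace of a power of a small matrix, and that
trace can be checked against a brute-force sum over all spin
configurations.

    # Sketch: a one-dimensional Ising ring of n spins, coupling J,
    # inverse temperature beta.  Z = Tr(T^n) for the 2x2 transfer matrix
    # T = [[e^{bJ}, e^{-bJ}], [e^{-bJ}, e^{bJ}]], whose eigenvalues are
    # 2*cosh(bJ) and 2*sinh(bJ).
    import itertools
    import math

    def brute_force_Z(n, beta, J=1.0):
        total = 0.0
        for spins in itertools.product((-1, 1), repeat=n):
            energy = -J * sum(spins[i] * spins[(i + 1) % n]
                              for i in range(n))
            total += math.exp(-beta * energy)
        return total

    def transfer_matrix_Z(n, beta, J=1.0):
        lam1 = 2.0 * math.cosh(beta * J)
        lam2 = 2.0 * math.sinh(beta * J)
        return lam1 ** n + lam2 ** n   # trace of T^n

    n, beta = 10, 0.7
    print(brute_force_Z(n, beta), transfer_matrix_Z(n, beta))  # agree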

Parenthetically, I should note that Onsager's original paper might well
be another candidate for a lengthy calculation. Subsequent calculations
of asymptotic properties of the Ising model by Wu and McCoy (1966 ff)
and collaborators are impressive for their length and complexity and for
the courage needed to carry them through. What is striking is that at
the end of one such calculation, the Painlevé transcendents appear, and
that appearance has since become significant for much of contemporary
mathematics and mathematical physics.

It would seem that there are two analogies here. The Dedekind-Weil
analogy has been worked on for more than 150 years, most recently in its
connection with representation theory in the Langlands Program.
The physicists have been exactly solving the Ising model in two
dimensions for more than sixty years and have produced a wide variety of
solutions, employing what are in effect group representations from the
beginning. Those various solutions would seem to be naturally described
and classified using the categories provided by the mathematicians. The
analogy the mathematicians seek to develop generically is developed and
proven in its particular realm as a matter of course by the everyday
work of the physicists. The physicists, in passing, provide an example
of just what the mathematicians seek. The multiplicity of the physicists'
solutions is given meaning and order by the mathematicians' hard-won
concepts. I am unsure whether the physicists' analogy is provably the
same as the mathematicians'. But surely the Dedekind-Weil analogy
provides a way of thinking of diverse phenomena as being naturally
connected, rather than their being merely many ways of solving a
problem.

These analogies and the analogy between them (the syzygy) organize an
enormous amount of information, suggest facts in one realm that might be
true in another, and illuminate concepts among the columns and the
analogies.

What Do Mathematicians Do?

Words such as convention, analyzing everyday notions, calculation, and
analogy might be used to describe activities other than mathematics. And
it is just in this sense that we might give outsiders a sense of what
mathematicians do. At the same time, those notions have very specific
meanings for mathematical work. And it is just in this latter sense that
we might describe mathematics to ourselves. The shared set of terms
allows us to connect our highly technical and often esoteric work with
the work of others. Mathematicians show why some ways of thinking of the
world are the right ways, they explore our everyday intuitions and make
them rather more precise, they do long and tortuous calculations in
order to reveal the consequences of their theories, and they explore
analogies of one theory with others in order to find out the truths of
the mathematical world.

I would also claim that, in a very specific sense, mathematical work is
a form of philosophical analysis. The mathematicians and mathematical
physicists find out through their rigorous proofs just which features of
the world are necessary if we are to have the kind of world we do have.
For example, if there is to be stability of matter, electrons must be
fermions. The mathematicians show just what we mean by everyday notions
such as an average or nearbyness. And mathematics connects diverse
phenomena through encompassing theories and speculative analogies.

So when you are asked, What do mathematicians do?, you can say: I like
to think we are just like lawyers or philosophers who explore the
meanings of our everyday concepts, we are like inventors who employ
analogies to solve problems, and we are like marketers who try to
convince others they ought to think "Kodak" when they hear "photography"
(or the competition, who try to convince them that they ought to think
"Fuji"). Moreover, some of the time, our work is not unlike solving a
two-thousand-piece jigsaw puzzle, all in one color. That surely involves
lots of scut work, but also ingenuity along the way in dividing up the
work, sorting the pieces, and knowing that it often makes sense to build
the border first.

Sources

The material in this article is drawn from Martin H. Krieger,
Constitutions of Matter (Chicago: University of Chicago Press, 1996) and
Doing Mathematics (Singapore: World Scientific, 2003). See, especially,
R.P. Langlands, "Representation theory: Its rise and role in number
theory", which originally appeared in Proceedings of the Gibbs Symposium
(Providence: AMS, 1990), but is also available at
http://www.sunsite.ubc.ca/DigitalMathArchive/Langlands/pdf/gibbs-ps.pdf

