[Paleopsych] Review of Richard Swinburne, ed., Bayes's Theorem

Premise Checker checker at panix.com
Fri Jul 1 17:38:40 UTC 2005

Bayes's Theorem (Proceedings of the British Academy, vol. 113), edited by
Richard Swinburne, Oxford University Press, 2002, 160 pages

Reviewed by Paul Anand, The Open University and Health Economics Research
Centre, University of Oxford

Economics and Philosophy (2005), 21:139-142 Cambridge University Press
DOI: 10.1017/S026626710422051X

This short collection of essays celebrates the 200th anniversary of
Bayes's Theorem, famous or notorious depending on one's perspective, as
the basis for a non-classical approach to statistical inference. Given the
steady rise of Bayesianism in econometric and related statistical work, a
volume – even one by philosophers – devoted to the theorem responsible
should be of considerable interest to many scientists, economists and
econometricians included. Comprising four papers based on presentations
given to a British Academy symposium, an additional article by David
Miller, a biographical note by G. A. Barnard first published in Biometrika
in 1958, and a version of the Reverend Thomas Bayes's original essay presented
posthumously by Richard Price to the Royal Society in 1763, the collection
highlights the existence of a small (and important) body of work that
continues to examine conceptual issues in the foundations of statistics.
In this review, I shall make brief comments on contributions but say most
about the papers by Sober and Howson.

In a substantial introduction (chapter 1), Richard Swinburne locates
Bayes's Theorem in a world that permits many concepts of probability. He
begins with some preliminary remarks on the meaning of probability and a
distinction between logical or evidential probability on the one hand, and
statistical probability on the other, due to Carnap. He offers a summary
of some probability axioms stated as relating to classes first, and then
to propositions, and though he says little about the difficulties that are
said to follow from the latter approach, he provides a simple account of
the Dutch Book argument claiming that it is strongest when applied to bets
that take place simultaneously (a point that parallels a similar issue in
the literature on rationality and intransitive preference – see, for
example, Anand (1993)). The introduction then develops a thesis about the
limits to the justification of prior probabilities: only a priori
criteria, including the concept of simplicity, can justify a world view in
which certain (probability-affecting) factors operate everywhere, or so it
is maintained. It may be confusing to have an editor who takes a line
different from that of all his contributors on the importance of a priori
criteria, but the disparity does not seem to interfere with the analysis
that follows.

The essays themselves begin with a chapter by Elliott Sober whose title
‘Bayesianism – its Scope
and Limits’ indicates, precisely in my view, how we should think of
questions concerning Bayesian inference. Sober's description of the issues
is clear though it might have benefited from a discussion of the way in
which Bayesian inference is actually used by advocates of this approach to
inference. (The later chapter by Philip Dawid, a statistician, fills this
gap.) Nonetheless, the points about the difficulties faced by a version of
Bayesianism whose priors are grounded in insufficient reason, and about
the shift to a subjective approach that fails to meet the objective needs
of scientific method, are well made. These observations leave open the
possibility that Bayesianism with subjective priors might be valid in
decision theory even if it were not useful for scientific inference – a
position that seems consistent with Sober's view but one which awaits
justification.
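
The contrast Sober draws between priors and likelihoods comes down to the
two ingredients of Bayes's Theorem, P(H|O) = P(O|H)P(H)/P(O). A minimal
numerical sketch (the priors and likelihoods below are hypothetical,
chosen only for illustration):

```python
# Bayes's Theorem: P(H|O) = P(O|H) * P(H) / P(O)
# Hypothetical illustration: two rival hypotheses with subjective priors.

def posterior(priors, likelihoods):
    """Return posterior probabilities for each hypothesis given the data."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(O), by the law of total probability
    return [j / total for j in joint]

# Subjective priors over hypotheses H1, H2 (assumed for the example)
priors = [0.5, 0.5]
# Likelihoods P(O | H): how probable the observation is under each hypothesis
likelihoods = [0.8, 0.2]

print(posterior(priors, likelihoods))  # H1 is favoured 4:1 after updating
```

With equal priors the posterior simply mirrors the likelihoods; change the
priors and the posterior shifts with them, which is precisely the
subjectivity that troubles Sober.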

Sober's discussion proceeds to an examination of likelihoodism – an
emphasis on prob (observation/hypothesis) as opposed to probabilistic
approaches which emphasise prob (hypothesis/observation) – which he uses
as a foil to, and ultimately against, Bayesianism. The analysis begins by
noting that likelihoods are “often more objective than prior
probabilities,” identifies an absurd consequence of the likelihood
approach and goes on to
argue that what likelihoodism really provides is an account of support for
a hypothesis, rather than a measure of its overall plausibility. The
discussion is interesting but is linked to statistical inference in
biological applications in such a way that many economists would,
unfortunately, not find it easy to draw lessons from for their own work.
However, the same cannot be said for remarks designed, successfully in my
view, to interest readers in Akaike's (1973) framework for (econometric)
model selection which aims at finding models that are predictively
accurate but not necessarily true. Anyone who might use empirical evidence
could profitably read this section which casts Akaike's approach as an
alternative framework to Bayesianism. The fact that it penalises less
simple models may well be a significant advantage over Bayesianism, but
the claim that this can be justified on principled grounds remains to be
proven. At least from this discussion (which its author allows is not
comprehensive), it seems that Akaike's approach to predictive accuracy
parallels the move from R-squared to adjusted R-squared statistics.
However, just because Akaike's statistic makes a deduction for parameters
used and calls the result an unbiased estimate of predictive accuracy does
not, of itself, tell us that simplicity is, on conceptual grounds,
epistemically relevant, a point echoed in remarks by the following
contributor.
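
Akaike's criterion makes the penalty explicit: AIC = 2k - 2 ln L, where k
is the number of estimated parameters and L the maximised likelihood, so
extra parameters must be paid for with a sufficiently better fit. A small
sketch (the log-likelihood figures are hypothetical, chosen only for
illustration):

```python
# Akaike's information criterion: AIC = 2*k - 2*log_likelihood.
# Lower AIC is preferred; each extra parameter costs 2 units,
# much as adjusted R-squared discounts added regressors.

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted models: (maximised log-likelihood, parameter count)
simple = aic(log_likelihood=-104.0, n_params=2)   # AIC = 212.0
richer = aic(log_likelihood=-103.5, n_params=5)   # AIC = 217.0

# The richer model fits slightly better but not well enough to pay
# for its three extra parameters, so the simple model is selected.
print(min([("simple", simple), ("richer", richer)], key=lambda t: t[1]))
```

The selection step mirrors the move from R-squared to adjusted R-squared
noted above: improvement in fit is discounted by the parameters spent
achieving it.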

In chapter 3, Colin Howson provides a substantial and wide-ranging essay
in which he argues, essentially, for what he calls the ‘Second Bayesian
Theory’ (SBT) by which he appears to mean the probabilistic component of
theories by Ramsey and de Finetti. (Economists normally refer to this as
the theory of subjective probability and some may not be aware of but want
to consult Howson and Urbach's (1993) comprehensive and witty
introduction to the literature of which the chapter is part.) This paper
is divided into a longer part that surveys, over several subsections, some
of the background followed by a shorter, more technical and focused
discussion of issues surrounding a claim about the logical foundations of
the probability calculus. The survey section deals with topics that
include Fisher and significance tests, Lindley's paradox, likelihood,
priors and simplicity with the aim of raising concerns that Bayesianism
can resolve which the classical approach and its variants may not. The
second, shorter part of Howson's essay is devoted to a discussion,
centered around his previously published theorem, of the consistency of
SBT. (This is difficult as it brings together ideas from optimisation and
logic and then does a lot of work using non-technical language.)
Understanding the relations between logic and probability and the logical
basis of a probabilistic calculus are crucial issues touched on here
though I believe that further comments would have helped the reader assess
the project. Howson shows that SBT is an “authentic logic” but given that
SBT (from de Finetti on) is an axiomatic theory anyway, I wonder how
Howson's arguments for consistency relate to and compare with the claim
that SBT is normatively desirable on account of its axioms. One might also
ask whether being an authentic logic serves to distinguish SBT from its
alternatives – we now know that a wide range of non-expected,
intransitive utility theories can be formalised and normatively justified
so it would be useful to know how much significance we should attribute to
being an authentic logic. Put differently, if the “probability axioms are
the complete logic of probable inference” as Howson states, what, if
anything, does this tell us about the merits of alternative concepts of
credence or uncertainty? This is not a criticism of Howson but it is a
reminder that the revolution in the foundations of decision theory over
the past 30 years means that nothing about the theory of choice
(probability included) can be taken for granted.

Of the remaining three contributions, it is fair to say that Dawid's is
the most applied and decision-theoretic. His discussion of legal decisions
provides a good (if too rare) mix of application and foundational issues
that could be useful for those who teach the foundations of decision
theory. There is a
tendency for some Bayesians to propose the approach as a panacea for a
range of inference problems that require different concepts of credence
(rather than meanings of probability) and there is some evidence of that
tendency here too. Nonetheless, the framework for comparing approaches
that Dawid develops has been nicely honed and repays reading whatever
one's own methodological leanings.

In contrast, John Earman's chapter has a more historical flavour (unlike
his 1992 Bayes or Bust ) taking, as it does, themes that tend to interest
Bayesians and examining them in the context of Hume's analysis of evidence
for miracles. There are some potential points of contact with modern
concerns though these are not Earman's primary focus and the demolition
job he performs is likely to be of most interest to Hume scholars. The
last chapter in this collection, David Miller's discussion of the
propensity view, seems interesting in its own right, though I did feel
there was a question as to whether the paper is really sufficiently
relevant to the rest of the debate to merit inclusion. That quibble apart,
this book
provides researchers on the edge of the field with a sense of some key
current concerns as well as a useful reference point for those wanting to
explore the foundations of statistics (or decision theory) in more depth.


Anand, P. 1993. Foundations of rational choice under risk. Oxford
University Press (reprint 2002)

Earman, J. 1992. Bayes or bust. MIT Press

Howson, C. and P. Urbach. 1993. Scientific reasoning: the Bayesian
approach. Open Court (2nd edition)
