[extropy-chat] Coin Flip Paradox
gts
gts_2000 at yahoo.com
Mon Jan 29 15:51:00 UTC 2007
On Sun, 28 Jan 2007 23:59:59 -0500, The Avantguardian
<avantguardian2020 at yahoo.com> wrote:
>> Both Bayesians and propensity theorists can talk
>> meaningfully about the probability of single or rare events.
>> Frequentists cannot.
>
> Right. Which is a serious discontinuity in their
> theory in my view.
I would not call it a discontinuity -- but it is perhaps a needless
limitation.
Like you I'm inclined to look for something better than frequentism, but
not necessarily for the reasons you're giving. You may be right for the
wrong reasons. :)
Von Mises (the main developer of the frequency theory) was first and foremost
an *empiricist*. As such there is something refreshing and honest about
his approach to probability theory, at least to an empirically minded
person like me. To him, probability was a branch of *natural science* --
not a branch of logic or epistemology as so many philosophers of the
subject suppose. He studied frequencies of outcomes and derived laws from
his observations in much the same way that a physicist studies and derives
laws about the frequencies of electromagnetic radiation.
> To put it another way they borrow a
> tool from calculus called a limit and try to define a
> probability by it and it fails.
But as Von Mises argued, other sciences also make use of infinities in
their mathematical abstractions. Why should the science of probability be
prohibited from using them?
> A random sequence should *converge* on any hypothesized limit as n
> approaches infinity.
It does!
> Instead the measured frequency converges infinitely
> many times on the probability and diverges from that
> same probability just as often.
Though it is true that the measured frequency fluctuates, sometimes
diverging and sometimes converging, the typical size of the divergences
shrinks (on the order of 1/sqrt(n)) as n increases, so the measured
frequency converges overall on the probability. This can be demonstrated
both mathematically and empirically. For example, experiments will show
that after 1,000 flips the frequency of heads might be something like
.505, after 10,000 flips something like .5017, and after 100,000 flips
something like .4995, with the frequency of heads generally converging on
the idealized value of .500... as n goes hypothetically to infinity. No
one disputes this fact!
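Here is a quick Python sketch of this experiment (my own illustration,
not anything from Von Mises; the seed and flip counts are arbitrary):

import random

random.seed(2007)   # arbitrary seed, so the run is repeatable
heads = 0
flips = 0
for n in (1000, 10000, 100000, 1000000):
    while flips < n:
        heads += random.random() < 0.5   # one fair flip; True counts as 1
        flips += 1
    print("after %7d flips, m(H)/n = %.5f" % (n, heads / float(n)))

The exact frequencies vary from run to run, but the overall drift toward
.5 does not.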
Here is Von Mises' Axiom of Convergence in formal terms:
_Let A be an attribute of a collective C, and let m(A) be the number of
occurrences of A among the first n elements of C; then the limit of
m(A)/n as n goes to infinity exists._
(That limit is the probability of A in C.)
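In modern limit notation (my transcription; Von Mises' own symbols
differ), the axiom asserts:

  P(A) = \lim_{n \to \infty} \frac{m_n(A)}{n}

where m_n(A) counts the occurrences of A among the first n elements.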
Here is the Axiom of Convergence in terms of coin flips (fair or unfair
coin, it makes no difference here):
_Let H be the Heads attribute in a sequence of n flips of a coin, and let
m(H) be the number of heads observed; then the limit of m(H)/n as n goes
to infinity exists._
So, for an idealized fair coin, the limit of m(H)/n is the probability of
heads, .5 exactly, and that limit exists mathematically.
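To see that fairness makes no difference to the axiom, here is a variant
of the earlier sketch with a biased coin (the bias p = 0.3 is an
arbitrary choice of mine, purely for illustration):

import random

random.seed(1)   # arbitrary seed, so the run is repeatable
p = 0.3          # hypothetical bias; the limit should now be p, not .5
heads = 0
for i in range(1, 100001):
    heads += random.random() < p    # one biased flip; True counts as 1
    if i in (1000, 10000, 100000):
        print("m(H)/n after %6d flips: %.4f" % (i, heads / float(i)))

The limit the axiom asserts is whatever probability the coin actually
has; .5 is just the special case of a fair coin.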
Philosophers sometimes object to this use of mathematical infinities
(there is in fact a branch of frequentism called 'finite frequentism'
developed just to answer those objections), but no one seriously
questions the general usefulness of this axiom as given in its own terms.
Can you imagine a world in which relative frequencies do not converge on
some value consistent with Von Mises' axiom of convergence? I cannot. And
even if such an insane world were imaginable, that world is certainly not
the one in which we live. Von Mises and the frequentists deserve credit
where credit is due.
The question for philosophers of probability is not whether frequencies
converge as Von Mises observed. It is rather *why* they converge.
Subjective Bayesians have no answer to this question, any more than the
frequentists do. Propensity theorists, however, do have an answer.
> I am not saying you *can't* define probability in such
> a fashion, I am saying it is the less useful
> definition of randomness.
I'll answer this point in another post...
-gts