The Avantguardian avantguardian2020 at yahoo.com
Tue Jan 30 20:17:08 UTC 2007

--- gts <gts_2000 at yahoo.com> wrote:

> Like you I'm inclined to look for something better
> than frequentism, but
> not necessarily for the reasons you're giving. You
> may be right for the
> wrong reasons. :)

If you really want something better, then why are you
defending the frequentist Axiom of Randomness tooth
and nail?

> Von Mises (main developer of the frequency theory)
> was first and foremost
> an *empiricist*. As such there is something
> appealing about his approach to probability theory,
> at least to an
> empirically minded
> person like me.

I have no problem with Von Mises. He was right, he
just didn't know why.

> > To put it another way they borrow a
> > tool from calculus called a limit and try to
> define a
> > probability by it and it fails.
>
> But as Von Mises argued, other sciences also make
> use of infinities in
> their mathematical abstractions. Why should the
> science of probability be
> prohibited from using them?

Yes, but they use actual mathematical limits that
display strong convergence, i.e. functions that
approach their limits in a monotonic fashion. What Von
Mises measured was something called weak convergence,
which is the result of the Central Limit Theorem and
the Law of Large Numbers. Both essentially describe
the same thing: if you measure the means of enough
samples drawn at random from ANY possible
distribution, the means you measure will themselves
approximate a normal distribution. The mean of this
normal distribution of means (the mean of the means)
will be very close to the mean of the parent
distribution REGARDLESS of its shape.
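The claim above can be sketched in a few lines of Python. This is a minimal illustration, not anything Von Mises wrote: it draws samples from a deliberately non-normal (exponential) parent distribution and checks that the mean of the sample means lands near the parent mean anyway.

```python
import random
import statistics

random.seed(42)

PARENT_MEAN = 1.0   # mean of an Exponential(rate=1) distribution
SAMPLES = 2000      # number of samples drawn
SAMPLE_SIZE = 100   # draws per sample

# Collect the mean of each sample; by the CLT these means
# themselves approximate a normal distribution.
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(SAMPLE_SIZE))
    for _ in range(SAMPLES)
]

# The mean of the means sits very close to the parent mean,
# regardless of the parent distribution's (non-normal) shape.
mean_of_means = statistics.mean(sample_means)
print(mean_of_means)
```

Swapping in any other parent distribution (uniform, bimodal, whatever) gives the same effect, which is the whole point of the theorem.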

> Though it is true the measured frequency fluctuates,
> sometimes diverging
> and sometimes converging, the divergences decrease
> in magnitude as n
> increases, as the measured frequency converges
> over-all on the
> probability. This can be demonstrated both
> mathematically and empirically.

Actually it all depends on how you define magnitude.
For example, with coin flips the divergence *relative*
to the number of flips decreases, but the *absolute*
divergence INCREASES. If you flip a coin 4 times you
may get 1 head and 3 tails. That is a *relative*
divergence of 0.25 from the expectation value of 0.5,
but a divergence of only a single head (the absolute
divergence) from the expectation of getting equal
numbers of heads and tails.

Now if you take the coin and flip it 10000 times you
can quite realistically obtain 5100 heads and 4900
tails. The *relative* divergence of this from the
expectation is only 0.5100 - 0.5000 = 0.01. The
*absolute* divergence however is 100 more heads than
you expected.
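The arithmetic of those two examples can be checked directly. The helper name below is mine, not anything from the thread:

```python
def divergences(heads, flips, p=0.5):
    """Return (relative, absolute) divergence from the expected head count."""
    expected = p * flips
    absolute = abs(heads - expected)   # divergence in raw head counts
    relative = absolute / flips        # divergence in frequency
    return relative, absolute

# 4 flips, 1 head: relative divergence 0.25, absolute divergence 1
print(divergences(1, 4))         # (0.25, 1.0)

# 10000 flips, 5100 heads: relative divergence 0.01, absolute divergence 100
print(divergences(5100, 10000))  # (0.01, 100.0)
```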

The *absolute* divergence from the expectation rises
as the square root of the number of flips. It is only
when you divide this divergence by the number of
flips, i.e. SQRT(N)/N, that you get any convergence at
all.
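A quick simulation makes the SQRT(N) scaling visible. This is a rough sketch of my own: it averages the absolute divergence |heads - N/2| over many runs of N flips, then also prints that divergence divided by N.

```python
import random

random.seed(1)

def mean_abs_divergence(n_flips, runs=500):
    """Average |heads - n_flips/2| over many runs of n_flips fair-coin flips."""
    total = 0.0
    for _ in range(runs):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        total += abs(heads - n_flips / 2)
    return total / runs

# Quadrupling N roughly doubles the absolute divergence (SQRT(N) growth),
# while the relative divergence, divergence/N, keeps shrinking.
for n in (100, 400, 1600):
    d = mean_abs_divergence(n)
    print(n, round(d, 1), round(d / n, 4))
```

The absolute column grows while the relative column shrinks, which is exactly the distinction drawn above.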

> The question for philosophers of probability is not
> whether frequencies
> converge as Mises observed. It is rather *why* they
> converge. Subjective
> bayesians have no answer to this question any more
> than do the
> frequentists. Propensity theorists however do have

I just told you: the Central Limit Theorem and the Law
of Large Numbers are both alternate descriptions of
the same underlying phenomenon, which is essentially a
law of nature. That is why they converge, although the
convergence itself is only *relative* to the ever
increasing number of trials.

Stuart LaForge
alt email: stuart"AT"ucla.edu

"If we all did the things we are capable of doing, we would literally astound ourselves." - Thomas Edison

