# [extropy-chat] Coin Flip Paradox (was Randomness)

The Avantguardian avantguardian2020 at yahoo.com
Sun Jan 28 23:05:39 UTC 2007

--- gts <gts_2000 at yahoo.com> wrote:

> Rafal, this excerpt below goes directly to the
> question we were discussing
> of whether a sequence of coin-flips should be
> considered completely random
> even when the coin is heavily weighted.
>
> As below the conventional answer is yes, because as
> I indicated yesterday
> the sequence satisfies the axiom or conditions of
> randomness.

I disagree, because I am completely dissatisfied with
the Axiom of Randomness as so stated.

> ====
> Axiom of Randomness: the limiting relative frequency
> of each attribute in
> a collective C is the same in any infinite
> subsequence of C which is
> determined by a place selection.

The problem is that it rests on the frequentist
notion that the probability of a specific outcome is
the limiting relative frequency of that outcome in an
infinite sequence of possible outcomes. While this may
seem intuitive, it does not hold up mathematically.

To illustrate why, I invoke what I call the coin-flip
paradox: what is the distribution of the observed
frequencies of heads in n flips of a fair (i.e.
unweighted) coin?

Now mind you, an unweighted coin is the simplest
generator of randomness that we know of, and since we
were schoolchildren we have been taught that the
probability of getting heads is exactly 0.5.

The paradox arises when one compares the *known*
probability of getting heads with the relative
frequency of heads observed in a progressively longer
sequence of n flips of the coin.

To illustrate this, first think of flipping a coin
once. The possible outcomes for the *observed*
frequency of heads in this situation are 1 and 0.
It is in fact *impossible* in an odd number of coin
flips to achieve an *observed* frequency of 0.5, as
the textbooks would seem to suggest. Since half of all
possible values of n are odd, with no further
analysis one could expect to *observe* a frequency of
0.5 for heads in n flips of a coin at *most* 50% of
the time. In fact, the only value of n for which one
gets a frequency of 0.5 for heads at least half the
time is n=2.

P(HH or TT)=0.5 and P(HT or TH)=0.5
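This is easy to check by direct computation. A quick
Python sketch (the helper name `p_half_heads` is my
own, not anything standard) gives the exact
probability of observing a frequency of exactly 0.5
for small n:

```python
from math import comb

def p_half_heads(n):
    # Exact probability that exactly half of n fair-coin flips are heads.
    # Zero for odd n, since half of an odd n is not a whole number of heads.
    if n % 2:
        return 0.0
    return comb(n, n // 2) / 2 ** n

for n in (1, 2, 3, 4, 5):
    print(n, p_half_heads(n))
# n=2 gives 0.5; n=4 gives 0.375; every odd n gives 0.0
```

As claimed, n=2 is the only value where the observed
frequency matches the known probability at least half
the time.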

For higher values of n, the situation gets steadily
worse. For n=3, it is again *impossible* to observe
heads at a 0.5 frequency. For n=4, we find that we
*observe* a 0.5 frequency of heads 3/8ths (0.375) of
the time. In fact, for n=4 it is more likely that you
will observe (3H and 1T) or (1H and 3T), with a
combined probability of 0.5. The same is true for the
sexes of children in four-children families: half the
families will have three of one sex and one of the
other, 3/8ths of the families will have two of each
sex, and a mere 1/8th of the families will have all
of one sex or the other.
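Those family fractions can be verified by brute
force. A small sketch enumerating all 16 equally
likely birth orders (my own illustration of the
figures above):

```python
from itertools import product
from collections import Counter

# Enumerate all 2^4 equally likely sex assignments for a four-child family
# and tally the split between the sexes:
#   0 = all one sex, 1 = three of one and one of the other, 2 = two of each.
splits = Counter()
for family in product("BG", repeat=4):
    boys = family.count("B")
    splits[min(boys, 4 - boys)] += 1

for split, count in sorted(splits.items()):
    print(split, count / 16)
# 0 -> 0.125 (1/8th), 1 -> 0.5 (half), 2 -> 0.375 (3/8ths)
```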

For high *even* values of n, the situation becomes
rather dismal. For n=1000, one can only expect to
*observe* a frequency of 0.5 for heads approximately
2.5% of the time. So contrary to intuition, the more
times you flip the coin, the *less* likely you are to
measure a frequency of heads equal to the *known*
probability of getting heads on a coin flip in the
first place. That, in a nutshell, is the Coin Flip
Paradox and why I am a Bayesian.
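The 2.5% figure is just the central binomial term
C(1000, 500)/2^1000, which for large even n falls off
like sqrt(2/(pi*n)). A sketch checking both the exact
value and that approximation:

```python
from math import comb, pi, sqrt

def p_half_heads(n):
    # Exact probability of exactly n/2 heads in n fair flips (0 for odd n).
    return comb(n, n // 2) / 2 ** n if n % 2 == 0 else 0.0

exact = p_half_heads(1000)
approx = sqrt(2 / (pi * 1000))  # large-n (Stirling) approximation
print(f"exact = {exact:.4f}, approx = {approx:.4f}")
# both come out near 0.025, i.e. about 2.5% of the time
```

Since sqrt(2/(pi*n)) shrinks toward zero as n grows,
the chance of ever *observing* the known probability
only gets worse with more flips.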

> The probability of an attribute A, relative to a
> collective C, is then
> defined as the limiting relative frequency of A in
> C. Note that a constant
> sequence such as H, H, H, …, in which the limiting
> relative frequency is
> the same in any infinite subsequence, trivially
> satisfies the axiom of
> randomness. This puts some strain on the terminology
> -- offhand, such
> sequences appear to be as non-random as they come --
> although to be sure
> it is desirable that probabilities be assigned even
> in such sequences.

On a more technical note, for the reasons given above
the probability of a random outcome is NOT the
limiting frequency. It differs significantly from the
mathematical definition of a limit as defined for a
function.

A true limit can never be reached at any finite input
to a function. A frequency equal to the probability,
however, *can* be reached in a finite sequence of
random events (e.g. with coin flips, n=2 as above),
yet it becomes increasingly rare as the sequence
grows in size, with probability going to zero for an
infinite sequence. If probability were a limiting
frequency, on the other hand, then it would be
mathematically *guaranteed* that the frequency would
equal the probability at infinite n.

Furthermore, limits are monotonic in the sense that,
for a true limit, the value at a higher n will always
be closer to the limit than the value at a lower n.
Again, in random sequences this is not the case: you
can *observe* frequencies of random events that
temporarily diverge from the probability of those
events, even deep into a long sequence.

For example, you could flip a coin six times and get
3 heads and 3 tails, and then get a run of heads on
your next six tosses. Thus the *observed* frequency
is moving away from the known probability.
Mathematical limits do not behave this way. Thus the
Axiom of Randomness is BOGUS. Unfortunately, I have
yet to come up with a better definition myself.
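To put numbers on that example, here is a sketch
tracking the running frequency of heads through
exactly that hypothetical sequence (3H/3T followed by
six heads):

```python
# Six flips giving 3 heads and 3 tails, followed by a run of six heads
# (1 = heads, 0 = tails).
flips = [1, 0, 1, 0, 1, 0] + [1] * 6

heads = 0
running = []
for n, flip in enumerate(flips, start=1):
    heads += flip
    running.append(heads / n)

print(running[5], running[11])  # frequency after 6 flips, then after 12
# after 6 flips the observed frequency sits exactly at 0.5;
# after 12 it has moved *away*, to 0.75
```

The observed frequency was exactly at the known
probability and then diverged from it, which a
convergent mathematical limit never does.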

> Entropy and randomness are very closely related
> ideas, but I'm inclined to
> keep them apart.

So am I for the time being.

Stuart LaForge
alt email: stuart"AT"ucla.edu

"If we all did the things we are capable of doing, we would literally astound ourselves." - Thomas Edison
