[ExI] Probability is in the Mind

Amara Graps amara at amara.com
Thu Mar 13 06:12:21 UTC 2008


Jef Allbright:
>[Once more, this time with the link.]
>
>On Wed, Mar 12, 2008 at 6:55 AM, Jef Allbright <jef at 
>jefallbright.net> wrote:
>>  Another excellent and highly applicable post by Eliezer on the
>>   Overcoming Bias blog.
>
><http://www.overcomingbias.com/2008/03/mind-probabilit.html>

I think that I committed the Mind Projection Fallacy last night. My
friend and I exited my work building at precisely the time that the
building's fire alarm went off. I was sure that we had triggered the
alarm when we exited the doors, but that wasn't true. I had nothing to
do with the alarm. The workers in the parking garage below had triggered
the alarm.

Since the extropians archives don't go back to 2002-3, I'll paste
something I wrote to the list a couple of my lifetimes ago.

Amara

-----------------------------September 13, 2002
To: extropians at extropy.org
From: Amara Graps <amara at amara.com>
Subject: Physics and Interpretations (was Postmodernists have nothing 
useful to contribute)


Serafino:
>Bohr wrote (in 'The Unity of Science') " [Complementarity
>should] be seen as a logical expression of our situation
>concerning objective description in this area of experience. The
>realization that the interaction between measuring devices and
>the physical systems forms an integrating part of quantum
>phenomena, has not only revealed an unexpected limitation of the
>mechanistic view of nature which attributes well defined
>properties to the objects themselves, but it has forced us to
>give special attention to the problem of observation when
>ordering the experiences. "

>Bohr also wrote (in Atomic Physics and Human Knowledge, Wiley,
>1959) " ... a subsequent measurement to a certain degree
>deprives the information given by a previous measurement of its
>significance for predicting the future course of phenomena.
>Obviously, these facts not only set a limit to the extent of the
>information obtainable by measurement, but they also set a limit
>to the meaning which we may attribute to such information. We
>meet here in a new light the old truth that in our description
>of nature the purpose is not to disclose the real essence of the
>phenomena but only to track down, so far as it is possible,
>relations between the manifold aspects of our experience."

>Note he uses the terms 'information' and 'limit'. They are the very
>essence of the Copenhagen Interpretation.

Yes, in that Bohr's 'Copenhagen Theory' says that even though the QM
state vector gives only probabilities, it is a complete description
of reality in the sense that nothing more can ever be known; not
because of technological limitations, but because of fundamental
principles.

But Jaynes seems pretty convinced (in 'Clearing Up Mysteries - the
Original Goal') about Bohr's way of perceiving physics problems. He
says that persistent throughout Bohr's writings (which Jaynes calls
vague, puzzling, foglike) is a common logical structure indicating
that Bohr was never on the ontological level traditional in physics.
He was always discussing _not_ Nature, but our _information_ about
Nature; but physics at that time did not have the vocabulary for
expressing ideas on that level, so his words appeared muddy.

About Dirac: I learned (from Jaynes' writings) that Dirac worked
side by side with Harold Jeffreys (a Bayesian ...) for a little
while at St. John's College, and he seems not to have realized what
Jeffreys' probability theory could offer, that is, a vehicle for
expressing epistemological notions quantitatively. Jaynes said that
if either Bohr or Dirac had understood the work of Jeffreys, the
recent history of theoretical physics might have been very
different: they would have had the language and the technical
apparatus with which Bohr's ideas could be stated and worked out
precisely, without mysticism. Had they done this, and explained
clearly the distinction between the ontological and epistemological
levels, Einstein would have understood it and accepted it.

It seems to me that the Q.M. Bayesian folks should collect their
papers, plus the much older work going back to the first half of the
last century, and put it all in a book for more accessibility. There
is a lot of Bayesian literature on this Q.M. topic going back 50
years, but it is really scattered. If the handful of Q.M. Bayesians
that I met at my first (and only) MaxEnt conference four years ago
is representative, then there must be a couple dozen people in the
world actively working on this topic at present.

Anyway, "if, then, should, could.." I've already written (too) many
times about Bayesian stuff in past years on this list and I have bigger
things on my plate, as you know.

I'll simply summarize with some things that a former Bayesian acquaintance
told me four years ago while I was writing a popular science article
about this Bayesian stuff.

Amara

----------------------
----------------------
A summary: "Who are The Bayesians?"
by A. Gottvald, September 1998.

The Bayesians assert that:

Our inference is always conditional on some prior information, which
includes our data. There is no such thing as "unconditional
probability". All human knowledge is _conditional_.

The probability of an event is interpreted as a state of our
knowledge about the event.

The probability of an event (and consequently, information about the
event) is not an absolute physical attribute of the event, but
rather a model representing our state of knowledge about the event.
For a Bayesian, information is neither a physically existing nor an
absolute "fluid" flowing from a transmitter to a receiver;
information is inseparable from a prior state of our mind.

An orthodox 'frequentist' interpretation of probability, in terms of
a "random variable", is only a very special case of the Bayesian
concept of probability. Bayesians do not use the concept of a
"random variable" whose frequency approaches a probability in the
limit, as it is too restrictive and fuzzy for many phenomena.

A probability of a probability represents the difference in
stability of our states of knowledge about different events. E.g.,
consider the stable probability assigned to a die, versus the
unstable probability assigned to the existence of life on Mars.

Using all available prior knowledge (contextual information) is the
most objective way to analyze our data (hypotheses). In general, the
data themselves also provide some prior knowledge about how to
analyze them, and the prior knowledge reduces the uncertainty of our
inference.

A logical relationship between events (and their probabilities) does
not imply a causal (physical) relationship between the events. Here
is the origin of the Mind Projection Fallacy, which is behind a huge
number of misconceptions and 'paradoxes' in mathematics (set theory,
information theory, Fourier transform, ...), physics (quantum and
relativistic physics, potential, ...), and philosophy (Bohr,
Einstein, Bohm, Popper, Penrose, ...), which puzzled a big part of
science in the [last] century. (In contrast, the Bayesians know that
when a new fossil changes our picture of a dinosaur, it does not
mean that we physically changed something in Jurassic park.)

Bayes' Theorem is just the multiplication rule of probability
theory; it shows the relationship between a posterior probability, a
likelihood of the data given a model, and a prior probability.
Bayes' Theorem is only one important segment of probability theory,
understood as an extended logic of rational inference.
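
In symbols (my notation, added here for clarity; H is a hypothesis,
D the data, and I the prior information):

    P(H | D, I) = P(D | H, I) P(H | I) / P(D | I)

i.e., posterior = likelihood x prior / evidence.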

The prior probability and posterior probability are not necessarily
related in time. These concepts just express different relationships
to the data to be analyzed.

The Bayesian methodologies approach scientific inference from "first
principles", grasping an n-parameter problem directly with an
n-dimensional posterior probability distribution. This general model
gives a systematic, straightforward way to integrate out nuisance
parameters, to compute maximally unbiased estimates of parameters,
to evaluate probabilities of hypotheses, etc.
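
A minimal numerical sketch of the nuisance-parameter step (the toy
Gaussian model, priors, and numbers here are my own illustration,
not Gottvald's):

    # Toy model: data with unknown mean mu and unknown scale sigma;
    # we want p(mu | data) with sigma integrated out as a nuisance
    # parameter. All numbers are invented for illustration.
    import numpy as np

    mu = np.linspace(-5.0, 5.0, 201)            # parameter of interest
    sigma = np.linspace(0.1, 5.0, 200)          # nuisance parameter
    MU, SIGMA = np.meshgrid(mu, sigma, indexing="ij")

    data = np.array([0.8, 1.2, 0.5, 1.9, 1.1])  # invented measurements

    # Gaussian log-likelihood summed over the data points.
    loglike = sum(-0.5 * ((d - MU) / SIGMA) ** 2 - np.log(SIGMA)
                  for d in data)

    # Flat prior in mu, Jeffreys prior 1/sigma in the scale.
    logpost = loglike - np.log(SIGMA)

    post = np.exp(logpost - logpost.max())     # 2-D posterior (unnormalized)
    post_mu = np.trapz(post, sigma, axis=1)    # integrate sigma out
    post_mu /= np.trapz(post_mu, mu)           # normalized p(mu | data)

    print("posterior mean of mu:", np.trapz(mu * post_mu, mu))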

As a practical rule, the Bayesian methodology of inference assigns
practically the same probabilities to events (parameter estimates)
as orthodox methods do only in the abstract limit where no prior
knowledge about the event is available. When some prior information
is available, the Bayesian methodology is superior at detecting
existing phenomena and at rejecting non-existing ones.

In summary, the Bayesians deal with the uncertainty of our
inference, and its fundamental relationship to prior information.
They clarify some puzzling relationships between our data, our
models, and our prior and posterior knowledge. The Bayesians apply
their systematic methodology to see neither too much nor too little
in our data. They actually apply an extended logic of scientific
inference, which translates our human knowledge into rational
statements about our external perceptions.
----------------------


--

***********************************************************************
Amara Graps, PhD             email: amara at amara.com
Computational Physics        vita:  ftp://ftp.amara.com/pub/resume.txt
Multiplex Answers            URL:   http://www.amara.com/
***********************************************************************
"There's only one thing more beautiful than a beautiful dream, and
that's a beautiful reality."        --Ashleigh Brilliant

-----------------wta-talk June 18, 2003

To: wta-talk at yahoogroups.com
From: Amara Graps <amara at amara.com>
Subject: Re: Bayes vs. LP
Cc: extropians at extropy.org

(ccing extropians too, only because this topic has appeared there before)

"bzr" <bzr at csd.net>, Sun, 15 Jun 2003:

>However, perhaps the easiest way to see that the Bayesian framework won't do
>as a comprehensive framework for science, and why it assuredly can't proxy
>as the whole of a philosophy of science, is to consider this problem:

>We have a penny.  We toss it.  What are the odds that we'll get heads?

>The answer:  0

>Zero?  Yes.  This is very counterintuitive, admittedly, particularly for
>statisticians.  However, the truth is there are no "odds" here at all.
>Penny tossing is deterministic.

>That being the case, if we are apprised of all the initial
>conditions of the toss, and possess a complete knowledge of the laws
>of physics, then we can predict with certainty what we will get
>(heads, or tails, or, very rarely, a coin on edge).  Even more
>interestingly (and counterintuitively), without any knowledge of
>statistics OR knowledge of physics we can be sure that, provided the
>test surface is flat, we will get heads, tails, or a coin on its
>edge.

I think that using determinism in this way is putting up a smoke
screen, in addition to missing the bigger picture of how scientists
intuitively do science. You have a real experiment, so it is
physical, and all propositions are testable. How do you define
determinism for this system? Your determinism is based on a model of
some physics, is it not? No matter how 'deterministic' something may
be, your prediction for the outcome of the coin toss is based on
data, a model, and whatever other information you have about that
system. A Bayes discussion is always in the realm of epistemology,
i.e. how we know what we know.

Humans never know how nature _is_. All humans can do is make an
abstract physical description of nature. Scientific studies are how
we are able to process information in order to say some things about
that nature. Bayesian concepts make this process explicit. A
Bayesian perspective of science says that any theory about reality
can have no consequences testable by us, unless that theory can also
describe what humans can see and know. Models, data, prior
information, in other words.

Note also how causality takes a back seat. A logical relationship
between events (and their probabilities) does not imply a causal
(physical) relationship between the events. Sometimes Bayesians call
this confusion the Mind Projection Fallacy, which is behind a huge
number of misconceptions and 'paradoxes' in mathematics (set theory,
information theory, Fourier transform, ...), physics (quantum and
relativistic physics, potential, ...), and philosophy (Bohr,
Einstein, Bohm, Popper, Penrose, ...).

Bayes' Theorem is just the multiplication rule of probability
theory; it shows the relationship between a posterior probability, a
likelihood of the data given a model, and a prior probability. The
prior probability and posterior probability are not necessarily
related in time; these concepts just express different relationships
to the data to be analyzed. The Bayesian methodologies approach
scientific inference from "first principles", grasping an
n-parameter problem directly with an n-dimensional posterior
probability distribution.
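
A tiny numerical check of that time-independence point (a toy coin
model of my own, not from the original discussion): updating on the
data in either order yields the same posterior, so "prior" and
"posterior" mark a relationship to the data, not positions in time.

    # Two Bernoulli observations applied in both orders give the
    # same posterior over theta = P(heads).
    import numpy as np

    theta = np.linspace(0.001, 0.999, 999)

    def update(prior, heads):
        # One coin toss: heads=True or heads=False.
        like = theta if heads else (1.0 - theta)
        post = prior * like
        return post / np.trapz(post, theta)

    flat = np.ones_like(theta)               # ignorance prior
    a = update(update(flat, True), False)    # heads, then tails
    b = update(update(flat, False), True)    # tails, then heads
    print(np.allclose(a, b))                 # True: order is irrelevant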


>The question of why statistical analysis "works" (to the extent that
>it does, and given an initial state of ignorance), or indeed the
>question of what conditions must pertain in order for statistical
>analysis to be appropriate, is not itself answerable by further
>statistical analysis.

No.

Some history. The Bayesian probabilistic ideas have been around
since the 1700s. Bernoulli, in 1713, recognized the distinction
between two definitions of probability: (1) probability as a measure
of the plausibility of an event with incomplete knowledge, and (2)
probability as the long-run frequency of occurrence of an event in a
sequence of repeated (sometimes hypothetical) experiments. The
former (1) is a general definition of probability adopted by the
Bayesians. The latter (2) is called the "frequentist" view,
sometimes called the "classical", "orthodox" or "sampling theory"
view.
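
To make definition (1) concrete, here is a toy example of my own
(not part of the history above): the plausibility a Bayesian assigns
to "heads" is a state of knowledge that sharpens as data arrive.

    # Posterior over theta = P(heads) for a coin of unknown bias,
    # starting from a flat (ignorance) prior. Data are invented.
    import numpy as np

    theta = np.linspace(0.0, 1.0, 1001)
    prior = np.ones_like(theta)                # flat prior
    heads, tosses = 7, 10                      # made-up tosses
    likelihood = theta**heads * (1.0 - theta)**(tosses - heads)

    posterior = prior * likelihood
    posterior /= np.trapz(posterior, theta)    # normalize

    # Posterior mean ~ 0.667 = (7+1)/(10+2): Laplace's rule of succession.
    print(np.trapz(theta * posterior, theta))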

Scientists who rely on frequentist definitions when assigning
uncertainties to their measurements should be careful. The concept
of sampling theory, or the statistical ensemble, is often not
relevant in astronomy, for example. A gamma-ray burst is a unique
event, observed once, and the astronomer needs to know what
uncertainty to place on the one data set he/she actually has, not on
thousands of other hypothetical gamma-ray burst events. Similarly,
the astronomer assigning uncertainty to the large-scale structure of
the Universe must base that uncertainty on _our_ particular
Universe, because there are no similar observations in each of the
"thousands of universes like our own."

The version of Bayes' Theorem that statisticians use today is
actually the generalized version due to Laplace.  One particularly
nice example of Laplace's Bayesian work was his estimation of the
mass of Saturn, given orbital data from various astronomical
observatories about the mutual perturbations of Jupiter and Saturn,
and using a physical argument that Saturn's mass cannot be so small
that it would lose its rings or so large that it would disrupt the
Solar System. Laplace said, in his conclusion, that the mass of
Saturn was (1/3512) of the solar mass, and he gave odds of 11,000
to 1 that the mass of Saturn lies within 1/100 of that value.
He should have placed a bet, because over the next 150 years, the
accumulation of data changed his estimate for the mass of Saturn by
only 0.63% ...
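
A quick arithmetic check of that bet, using only the numbers quoted
above:

    # Laplace's estimate and the quoted 0.63% shift over 150 years.
    laplace = 1.0 / 3512.0          # Saturn's mass, in solar masses
    shifted = laplace * 1.0063      # moved by 0.63%
    # The 11,000:1 odds were on the truth lying within 1/100 of his value:
    print(abs(shifted - laplace) / laplace < 0.01)   # True -- he wins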

More references that might be useful:

General for scientists: (article)
A.L. Graps, "Probability Offers Link Between Theory and Reality,"
Scientific Computing World, October 1998.

Focusing more on epistemology: (book)

_Scientific Reasoning: The Bayesian Approach_ by Colin Howson and Peter
Urbach, 1989, Open Court Publishing.

Focusing on implementation: (books)

_Bayesian Statistics_ (2nd edition) by Peter M. Lee, Oxford
University Press, 1997.

_Data Analysis: A Bayesian Tutorial_, Sivia, D.S., Clarendon Press:
Oxford, 1996.

Martz, Harry and Waller, Ray, chapter: "Bayesian Methods" in
_Statistical Methods for Physical Science_, Editors: John L.
Stanford and Stephen Vardeman [Volume 28 of the Methods of
Experimental Physics], Academic Press, 1994, pg. 403-432.


Other useful papers on the web:


Epistemology Probabilized by Richard Jeffrey
http://www.princeton.edu/~bayesway/

Edwin Jaynes: Probability
http://bayes.wustl.edu/

"Probability in Quantum Theory",
"Clearing up Mysteries- the Original Goal".

"Role and Meaning of Subjective Probability: Some Comments
on Common Misconceptions." by Giulio D'Agostini
http://zeual1.roma1.infn.it/~agostini/prob+stat.html


Amara

--

********************************************************************
Amara Graps, PhD          email: amara at amara.com
Computational Physics     vita:  ftp://ftp.amara.com/pub/resume.txt
Multiplex Answers         URL:   http://www.amara.com/
********************************************************************
"The understanding of atomic physics is child's play compared with the
understanding of child's play."  -- David Krech


