[extropy-chat] Re: Bad Bayesian - no biscuit!

Brett Paatsch bpaatsch at bigpond.net.au
Sat Jan 22 08:59:26 UTC 2005


Eliezer Yudkowsky wrote:

> Brett Paatsch wrote:
> >
> >     "Imagine that you wake up one morning and your left arm
> >      has been replaced by a blue tentacle. The blue tentacle
> >      obeys your motor commands - you can use it to pick up
> >      glasses, drive a car, etc. How would you explain this
> >      hypothetical scenario? Take a moment to ponder this
> >      puzzle before continuing."
> >
> > So I did imagine it. I imagined it in good faith, and I imagined it
> > consistent with a spirit of exploration and good will built that Eliezer 
> > had established through the early part of his essay.
>
> What about the spirit of cunning plots and mischief?

That's less scarce.

> I'm trying to forge rationality into a new and more coherent art,
> reusing my l33t build-a-mind-from-scratch skillz to go beyond that
> accumulated handicraft of rationality passed down from generation
> to generation.

Good stuff. But "l33t" is just arbitrary to me.

> > I wrote (and I quote):
> >
> >     "I'd "explain" it provisionally as some surprising organisation
> >     of people had entered my house and replaced my arm whilst
> >     I slept with technology I didn't know existed.
> >
> >     I'd be bewildered. Frightened even. But I'd not think "magic"
> >     had occurred".

> > And then, with the heightened curiosity of one who has escalated
> > their commitment I went back to see what Eliezer the Bayesian,
> > Eliezer the spreader-of-analogical-probability-clay-mass would have 
> > done.
> >
> > And he'd written this.
> >
> >    "How would I explain the event of my left arm being replaced
> >     by a blue tentacle? The answer is that I wouldn't. It isn't going
> >     to happen."

> The ideal of traditional rationality is that reality is allowed to tell
> you anything it wants, and you ought to shut up and listen - a stance
> arising from the sad human tendency to deny experimental evidence
> when it conflicts with something more valuable, like hope or authority.

Or as Feynman ([para 39] in accompanying post) said:

"This method [science] is based on the principle that observation is the
judge of whether something is so or not.  All other aspects and
characteristics of science can be understood directly when we understand
that observation is the ultimate and final judge of the truth of an idea.
But "prove" used in this way really means "test", in the same way that
hundred-proof alcohol is a test of the alcohol, and for people today the
idea really should be translated as, "The exception tests the rule." Or, put
another way, "The exception proves that the rule is wrong". That is the
principle of science. If there is an exception to any rule, and if it can be
proved by observation, that rule is wrong."


> .. The hypothesis of conservation of momentum is that momentum is 
> conserved 100.00000% of the time.  We may be uncertain, but the hypothesis 
> of "conservation of momentum" hypothesizes a state of
> affairs in which reality is *not* uncertain; a reality in which it is
> *absolutely certain* that momentum will be conserved on each and
> every occasion.

> .. If someone reports an experiment that violates conservation of 
> momentum, you shouldn't chalk it down to a rare exception to the
> general rule (maybe someone negotiated the laws of physics down a
> little from their extreme and unreasonable position that momentum should 
> be conserved on every single occasion).

I agree that is what one should not do given someone else's report.
The separate experiential world that each individual truth-seeker lives
in means that someone else's reported observation is not our own
observation. Someone else's observation that we haven't shared
can only be factored into our worldview as having some probability
of being true.

If my worldview, including my understanding of the law of
conservation of momentum, were as experientially rich as someone
like Feynman's, in other words as rich as that of someone who had
had many opportunities over many years to apply the scientific
method in a lab and so to personally observe nature in many of its
less common manifestations as well as in its common ones, then,
when confronted with a report of an observation that matched
neither my worldview (model) nor my experience, I'd make a
judgement as to the likely veracity of the claim (and therefore
whether to bother trying to observe for myself) based on a set of
priors. I'd weigh the prior probability that my experience of
physical science to date is incomplete (experience itself can't
be wrong, only misclassified) against the prior probability that
the other person, claiming to have observed something I haven't
yet observed, is right.
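That weighing can be sketched, very informally, as Bayes' rule applied
to the reporter's reliability. The function name and every number
below are my own illustrative assumptions, not anything from this
exchange:

```python
# Hedged sketch: weighing a second-hand report of a surprising
# observation.  All numbers here are illustrative assumptions.
def p_true_given_report(prior, p_report_if_true, p_report_if_false):
    # Bayes' rule: P(claim | report), from the claim's prior
    # probability and the reporter's reliability.
    num = prior * p_report_if_true
    return num / (num + (1.0 - prior) * p_report_if_false)

# A claimed violation of conservation of momentum: tiny prior,
# reported by a generally reliable experimenter.
posterior = p_true_given_report(prior=1e-6,
                                p_report_if_true=0.99,
                                p_report_if_false=0.01)
# the posterior stays well under one in ten thousand
```

Even a very reliable reporter leaves the posterior tiny when the prior
is tiny, which is why a single second-hand report shifts my map so
little.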

Whether I'd bother to try and check their observation to see if I
would see the same thing as them would depend on what else I
have to do with my time and on my assessment of the likelihood
of their observation being right given what I know of them and
my model of them.

Most of us don't have the experiential background or the same historical
set of personal observations as Feynman, so our confidence in the law
of the conservation of momentum necessarily comes from a different
place than Feynman's.

>..  I think that part of the Way is trying to fit yourself to the real 
>world,
> to the actual statistical frequency of events - deliberately thinking 
> about the statistical likelihood of any sample case presented for your
> attention.

And I agree that that is a part. An oft-neglected part. But it is only a
part of the Way.

> No one in all human history has ever woken up with a functioning
> tentacle in place of their arm.  You should have noticed that when
> I asked you to find an explanation for it.

Ah, but don't you see. No one in all of human history has ever woken
up with a functioning tentacle in place of their arm - to the best of
*my* current knowledge only. I didn't forget that that was to the best
of *my* current knowledge only when I entered into the spirit of your
hypothetical. I didn't forget that my current knowledge is knowledge
acquired in a particular way and that ultimately it is provisional
knowledge only. I hadn't needed to consider, or devote mindspace to,
the hypothetical before you put it to me. I thought of it only when
you invited me to imagine it.

When you say that there is nothing wrong with being bewildered and
not knowing under certain circumstances, I agree that there is nothing
*wrong* with it. But mere bewilderment, mere stocktaking and humility,
is not an "explanation".

> Occasionally I tread on the futile task of trying to persuade people not 
> to buy lottery tickets, and they say something along the line of "Someone 
> has to win!" or "You can't win if you don't play!"  To which the answer 
> is, "'Someone' will not be you.  You will not win the lottery, period.  I 
> could make a hundred thousand statements
> of equal strength and not be wrong even once.

That's not *"the"* answer it is just *an* answer. I see your point.
But I think your answer is suboptimal because it doesn't explain and
so cannot persuade (and persuasion was your apparent aim).

Yes, your math is *likely* to be valid. The odds against any person
winning the lotteries that I know of are certainly greater than
100,000 : 1, so it's true that *probably* you would not
be wrong in your statement even once. But it is not certain. Every
time you make that statement you increase your chances of being
wrong in some real case.
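To put rough numbers on that (the jackpot odds below are only an
assumed figure for illustration, not from any actual lottery):

```python
# Illustrative arithmetic: each single "you will not win" statement
# is almost certainly right, yet repeating it a hundred thousand
# times compounds the chance of being wrong at least once.
p_win = 1.0 / 10_000_000       # assumed jackpot odds for one player
n_statements = 100_000

p_wrong_at_least_once = 1.0 - (1.0 - p_win) ** n_statements
# roughly 0.01 - small, but real, and it grows with every statement
```

A one percent chance of being wrong somewhere is not the same thing as
"not be wrong even once", which was my point.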

> Let us learn to live in this universe the way it really is, attending to 
> the real frequency of events instead of the frequency of media reports of 
> events.  When someone asks us to imagine a magical outcome, let us forget 
> all of the novels we have read, and all the
> movies we have seen, and all the hopes of our childhood, and
> remember that the observed frequency is zero.

This advice taken, might lead to better policy decisions by the majority
but perhaps worse ones by an already empowered minority.

Let me give you an example. In the recent "John C Wright finds god"
discussion, Damien didn't forget all the novels he had read, the movies
he had seen, etc., in couching his arguments to John C Wright; quite to
the contrary, he integrated his understanding of such cultural biases,
and pointed out that John C Wright had had the sort of experience
that 'fit' with his culture rather than one that would have 'fit' with a
different culture.

Feynman ought not be encouraged to forget his experiences and
observations of the physical world over many years as first a child,
then an undergrad, then a postgrad. His experiences and
observations aren't to be forgotten; they are rather to be carefully
sorted and integrated with respect to each other. What we observe
of the real world is always telling us something of the real world. The
important thing with our models is to integrate correctly what it is
that we are observing.

> At the same time, let's not forget how ridiculous the 20th century would 
> have sounded if you'd reported it to a 19th-century listener.
> But reality is very constrained in what kind of ridiculousness it
> presents us with.  Not one of the ridiculous things that happened in the 
> 20th century violated conservation of momentum.

Then conservation of momentum was a good extrapolation from the
observations. It was better (more useful) to have extrapolated a law
of conservation of momentum than not to have done so.

> Oh... and if you *do* wake up with a tentacle in place of an arm...
> it's probably not because anyone snuck into your room; there must
> be a simpler, more likely explanation you didn't think of, or the event 
> wouldn't have happened.

My explanation was only provisional so if it happens I'll be open to
alternative explanations. And if it happens I won't have to throw
away all my experiences or forget stuff to explain it. I will only have
to change my model and I'll only have to change it in certain ways.

My model might differ from your model or say Feynman's model not
because of differences in rationality but because of differences in
experiences and differences in how I have integrated my experiences
into my personal worldview.

Even in the few seconds I was willing to spare for an "explanation",
the explanation that I proffered gives a hint of where I would start
looking for ways to realign my extrapolated current map of the
terrain with the new experience of the actual terrain.


> When you have only a poor explanation, one that doesn't make things 
> ordinary in retrospect, just admit you don't have an
> explanation, and keep going.  Poor explanations very, very rarely
> turn out to be actually correct.

I don't think that this is right, or that it is a logical conclusion to draw
from the better parts of your argument in your essay. We have maps
of the terrain of reality because we need them. Maps have utility. If
you give me a poor map, and I know nothing of you, and I find that the
map is wrong, then in that case, yes, perhaps I might be better off
without that map altogether. But if the map I have is one that I have
constructed myself, then when I find it differs from the terrain I can
just correct or improve the map.

If you are a rational Bayesian truth-seeker drawing a map and I get
a look at your map, then what I know about you will tell me
something about where your map is likely to be reliable and where
it is likely to be more risky for me to rely on it.

I stand in a different position to your map than I do to a map that
I've drawn for myself based on my experiences of the terrain. And
of course vice versa.

>  A gang of people sneaking into your room with unknown technology is a 
> poor explanation.  Whatever the real explanation
> was, it  wouldn't be that.

I think you can only establish that it's poor (for anyone other than
you) relative to the provision of a better one. "I don't know", whilst
a fair and honest answer, is not any sort of explanation. My answer
shows you I don't know, but doesn't leave you (or, importantly, me)
merely and completely bewildered. It gives me things to check.

I'd check first my premises about the state of my knowledge of
technology and of the state of my knowledge about what current
organisations of people could do rather than first checking the state
of my knowledge in some other area. I wouldn't be convinced that
the explanation had to lie in those domains I'd check first, but *I'd*
start looking there rather than elsewhere to correct *my* model.

>  Whatever the real explanation was, it  wouldn't be that.  If that's the
> best you can do, then "I wasn't expecting this and I have no clue why
> it happened or what will happen next" is a far superior answer.

Nah :-)

> In real life, sometimes we don't know what happened.

Of course that is true.

> .. If I'd sworn that I really did possess some concrete reason to
> anticipate that you might *actually* wake up with a tentacle, and
> asked you to guess my good reason, "I don't know" would have
> been the correct answer (taking my rationality as a fixed given).

I agree with you here, essentially. But that's a different case.

And it is very hard for us as individuals to take others' "rationalities"
as givens when we don't get to see their observations as our
own observations. Second- (or more-) hand "observations" have to
be discounted to some extent against first-hand ones.

> ..  I don't think you could be a Bayesian without knowing it, unless you 
> had unwittingly demanded that people be principled
> about assigning prior probabilities, or some such stance which
> today is commonly known as "Bayesian".

Informally, I think I may have been doing "some such", but in any
case having the "label" will allow me to do it more efficiently and
perhaps especially with other Bayesians.

>...  My point is that you should be wary of probabilities which are, 
>*given* the dominant physical hypothesis, actually zero or effectively 
>zero.

I get your point.

I also get that "laws of science" will feature as very probable priors
in most rational world views.

I wonder if you get my point, which is that two rationalists, both
Bayesians, will have different prior probabilities because of their
different experiences and the different maps of reality that they
have constructed from those experiences. And other people are part
of what reality is.

Where these two (or more) can come together and link their
maps from time to time is where they recognize that both are
committed to constructing a personal worldview that eliminates
inconsistencies, and that both know the other will also experience
cognitive dissonance when it's shown to them that their informal
(or formal) probability assignments don't sum to 1.
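As a toy illustration of how two such map-makers converge (the priors
and the data below are invented purely for the example): two Bayesians
start with different Beta priors on a coin's bias, then both update on
the same shared evidence.

```python
# Invented example: different experiential histories give different
# priors, but shared observations pull the posteriors together.
def posterior_mean(alpha, beta, heads, tails):
    # Beta(alpha, beta) prior plus binomial data gives a Beta
    # posterior; this returns that posterior's mean.
    return (alpha + heads) / (alpha + beta + heads + tails)

prior_a = (2, 8)   # a history suggesting the coin favours tails
prior_b = (8, 2)   # a history suggesting the coin favours heads

heads, tails = 60, 40   # evidence both observers share

mean_a = posterior_mean(*prior_a, heads, tails)
mean_b = posterior_mean(*prior_b, heads, tails)
# the gap between their estimates shrinks from 0.6 to about 0.05
```

Neither has to abandon his map; each corrects it with the shared
territory, and the maps draw closer without ever having to be identical.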

> If your explanation:  "A secret organization of people entered
> my house and replaced my arm with a tentacle using unknown
> technology" doesn't make you anticipate (even just a little)
> waking up with a tentacle tomorrow in this our real world, then
> it's a poor explanation.

The exploration of the hypothetical made me think about it.
It hasn't made me anticipate it. I don't assign any greater
probability to that prior (that chance that I'd wake up with an arm
replaced by a tentacle) now than I did before.

But if the prior became not an infinitesimal prior but a fact then I
don't think my process of trying to explain it to myself would be
affected. I'd look to correct my map with the information from the
territory. I'd know that my map isn't your map. My uncertainties
or degrees of uncertainties aren't going to correlate exactly with
yours, or indeed, with any other map-making, worldview-holding,
person.

I'd start the process of trying to explore differently to you unless we
happened to be in the same unlikely circumstance together and then
I'd be interested in your map as well as mine when mine seemed
doubtful. And perhaps vice versa.

> For it is this, our real world, in which you must live.  Nor should
> you bother trying to develop a better explanation.  For in this,
> our real world, you will have no need of it.

As general purpose advice that isn't bad. But I don't personally *just*
want to live in the world.  I want to be an agent for change in the
world. I want to steer the future world towards a course that suits
me better than will happen if I only "don't know".

"Don't know" isn't a philosophy of, or policy for, action. It's just a
starting point and a staging post. It's a recognition that the map is
wrong - and so necessary, but not sufficient for correcting the map
and then getting on. We need to retain our willingness to try to
"explain".

Progress depends on people (as change agents) being willing to
stick their necks out to try to explain.

Brett Paatsch 




