[extropy-chat] Bayes, crackpots and psi

Eliezer Yudkowsky sentience at pobox.com
Mon Dec 20 06:26:44 UTC 2004


Damien Broderick wrote:
> 
> For an interesting insight in what's going on here (both from me, and 
> from adamant skeptics like John and Eliezer), take a look at these 
> extracts from `Opposites detract', by Robert Matthews, visiting reader 
> in science at Aston University, Birmingham, UK, in New Scientist vol 181 
> issue 2438 - 13 March 2004, page 38:
> ======================
> [...]
> For years, well-designed studies carried out by researchers at respected
> institutions have produced evidence for the reality of ESP. The results
> are often more impressive than the outcome of clinical drug trials
> because they show a more pronounced effect and have greater statistical
> significance. What's more, ESP experiments have been replicated and
> their results are as consistent as many medical trials - and even more
> so in some cases (see Chart). In short, by all the normal rules for
> assessing scientific evidence, the case for ESP has been made. And yet
> most scientists still refuse to believe the findings, maintaining that
> ESP simply does not exist.

I should note that I also doubt medical trials that report marginal but 
statistically significant effects, on the basis that if parapsychologists 
and exit polls can be so wrong, the standard of proof is not high enough.

> Whether they realise it or not, scientists' thinking is influenced by
> Bayesian reasoning, and nowhere is this more apparent than in attitudes
> towards ESP research. By the standards of conventional science, the
> weight of evidence is now very impressive, but the scientific community
> refuses to accept the reality of ESP. Instead, they insist that
> extraordinary claims demand extraordinary evidence.

Or to put it another way, extraordinary claims demand large effect sizes. 
If you've got someone flinging teacups around the room by telekinesis or 
predicting lottery numbers, that is a far better kind of evidence than a 
"statistically significant" marginal effect.  I have to take publication 
bias into account, and the fact that I don't trust psi researchers. 
More than once, as I said, some amazing result in psi has turned out to be 
simply faked.

I should also note that there are papers on statistically significant 
marginal effects that show as large an "anti-psi" effect as a psi 
effect, or that telekinesis works just as well if the targets are 
selected after the attempted influence.  I regard that as strong 
evidence that it is 
all just bad statistics.  Why?  Because on the hypothesis of bad 
statistics, we expect anti-psi just as significant as psi.  We expect bogus 
statistical significance to be unaffected by such manipulations as 
selecting the targets after the attempted telekinesis takes place.  As for 
attempts to say that it is precognition... good heavens, now you're adding 
in closed timelike curves?  Imagine if Wood had removed the aluminum prism, 
Blondlot had stared wildly for a moment, and then cried:  "Why, N-Rays are 
focused by the place where an aluminum prism has previously been!  They 
must travel through time!"  He would have been laughed out of the house, 
and rightly 
so.  Poor statistics and overactive imaginations are not affected by 
removing the prism, and that is why anti-psi, and versions of experiments 
with targets selected afterward, et cetera, still show statistically 
significant marginal effects.
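The anti-psi point can be made concrete with a quick simulation (a sketch 
only; the experiment parameters - 1000 card guesses per run at 1-in-5 
chance accuracy - are invented for illustration).  With no real effect 
anywhere, two-sided testing at p<0.05 flags "psi" and "anti-psi" at about 
the same rate:

```python
import random
import math

random.seed(0)

def null_experiment(trials=1000, chance=0.2):
    """One card-guessing run with NO real effect: every guess is pure
    chance.  (Illustrative parameters: 1000 guesses, 1-in-5 accuracy.)"""
    hits = sum(random.random() < chance for _ in range(trials))
    # Normal approximation to the binomial gives a z-score for the run
    return (hits - trials * chance) / math.sqrt(trials * chance * (1 - chance))

psi = anti = 0
runs = 10_000
for _ in range(runs):
    z = null_experiment()
    if z > 1.96:        # "significant" above-chance scoring: claimed psi
        psi += 1
    elif z < -1.96:     # "significant" below-chance scoring: "anti-psi"
        anti += 1

print(psi, anti)        # roughly equal counts, a few percent of runs each
```

Nothing in the setup distinguishes the two tails, so bad statistics - 
here, pure sampling noise - produces "anti-psi" exactly as readily as psi.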

I want reproducibility.  I want p<10^-6 - yes, 10^-6.  I want useful 
technologies or macroscopically visible effects.  Extraordinary claims 
require extraordinary proof:  *Statistical significance of weak effects is 
not good enough*, and I regard psi as an excellent demonstration of this - 
that the standard tests of academia are too weak to weed out bogus claims.

> The famous criterion of "falsifiability", the notion that scientific
> theories can never be proved, only disproved, is therefore a comforting
> myth. In reality, scientists can (and do) dream up ways to explain away
> awkward findings. The only difference with parapsychology is that
> scientists have no qualms about invoking everything from incompetence to
> grand conspiracy to explain the results.

Many past "successful" psi experiments have, *in fact*, been the products 
of everything from incompetence to grand conspiracy.  I have to ask myself 
what kind of researcher goes to all the effort of getting a Ph.D. in 
parapsychology without being discouraged.  Sure, maybe some of them are 
honest and competent researchers.  Maybe the honest and competent ones 
report unsuccessful results in minor journals, except that one time out of 
20 they achieve a "statistically significant" result by chance.  And then 
there are the researchers who are dishonest, or incompetent, or who hope 
too much, or who apply ten statistical tests per experiment.  We shall hear 
more from them than from the honest researchers, to be sure.
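The "ten statistical tests per experiment" failure mode is easy to 
quantify (a sketch; ten independent, well-calibrated tests per study is 
an assumption for illustration).  The chance that at least one of ten 
null tests comes up "significant" at p<0.05 is already about forty 
percent:

```python
import random

random.seed(0)

TESTS_PER_EXPERIMENT = 10   # hypothetical: ten outcome measures per study
ALPHA = 0.05
EXPERIMENTS = 100_000

# Analytic: chance that at least one of ten independent null tests
# clears the significance threshold
analytic = 1 - (1 - ALPHA) ** TESTS_PER_EXPERIMENT
print(f"analytic:  {analytic:.3f}")    # about 0.401

# Simulation: a well-calibrated p-value is uniform on [0, 1] under the
# null, so draw ten uniforms per experiment and see how often any one
# of them falls below alpha
false_alarms = sum(
    any(random.random() < ALPHA for _ in range(TESTS_PER_EXPERIMENT))
    for _ in range(EXPERIMENTS)
)
print(f"simulated: {false_alarms / EXPERIMENTS:.3f}")
```

So a researcher who runs ten tests and reports the best one gets a 
publishable "effect" from pure noise nearly half the time.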

Give me flying teacups!  Give me lottery numbers!  Claim Randi's prize!  Do 
something *blatant*, and then I'll pay attention.  What parapsychologists 
have demonstrated is that an academic field can subsist on marginal but 
"statistically significant" effects in the absence of any real subject 
matter.  One does wonder how many other fields are doing the same.  Maybe 
the time has come for the journals of the world to adopt p<0.001 as the 
threshold for statistical significance - the stronger test wouldn't be 
strong enough to kill the field of parapsychology, but it would hurt it.
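The arithmetic behind that suggestion, assuming (hypothetically) a field 
of 1000 independent experiments with no real effect to study:

```python
# Hypothetical field: 1000 independent experiments, none studying a real
# effect, each reporting whether it cleared the journal's threshold
null_studies = 1000

for alpha in (0.05, 0.001):
    expected_hits = null_studies * alpha        # mean count of false positives
    p_at_least_one = 1 - (1 - alpha) ** null_studies
    print(f"alpha={alpha}: expect {expected_hits:g} chance 'successes', "
          f"P(at least one) = {p_at_least_one:.3f}")
```

At p<0.05 the field expects about fifty chance "successes"; at p<0.001 it 
expects about one, yet still has roughly a 63% chance of at least one - 
hence a stricter threshold that hurts without killing.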

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


