[extropy-chat] ESP controls

Ben Goertzel ben at goertzel.org
Sun Feb 11 20:11:20 UTC 2007


Damien,
>
>> I assume that statisticians studying psi experiments have attempted to
>> account for this phenomenon, but I don't know exactly how they have done
>> so....
>
> Yes, of course they have. The topic remains somewhat controversial 
> among statisticians, but Prof. Jessica Utts mentions it at, for random 
> example, http://anson.ucdavis.edu/~utts/91rmp.html  < Following 
> Rosenthal (1984), the authors calculated the "fail-safe N" indicating 
> the number of unreported studies that would have to be sitting in file 
> drawers in order to negate the significant effect. They found N = 
> 14,268, or a ratio of 46 unreported studies for each one reported. >  
> Given how time-intensive these trials are, and how few labs are doing 
> them, such a "cover up" is extremely unlikely.
>

OK, fair enough.  I agree this is the right kind of analysis to be done, 
and I'm glad someone has done it.
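For concreteness, the fail-safe N that Utts cites (following Rosenthal) answers: how many unpublished null-result studies would have to be sitting in file drawers to pull the combined significance below threshold? A minimal sketch of the usual computation, based on the Stouffer combined z; the function name and the one-tailed threshold 1.645 are my choices for illustration, not taken from the paper:

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal-style file-drawer estimate.

    Given the z-scores of k reported studies, return the number n of
    additional null (z = 0) studies needed so that the Stouffer combined
    z, sum(z) / sqrt(k + n), drops to the threshold z_alpha.
    Solving sum(z) / sqrt(k + n) = z_alpha for n gives the formula below.
    """
    k = len(z_scores)
    total_z = sum(z_scores)
    return (total_z / z_alpha) ** 2 - k
```

So ten studies each at z = 2 would need roughly 138 null studies in file drawers to wash out, which is the flavor of argument behind the 46-to-1 ratio quoted above.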

>> The level of BS in the psi literature is far higher than in the CF
>> literature
>
> I seriously doubt that. One has to use some elementary common sense in 
> segregating serious work (done at Princeton and Edinburgh 
> universities, for example) from the idiots, telephone "psychics" and 
> psychotic bloggers and from exploratory work later improved after 
> review and criticism. 
OK, that's a fair point.

Regarding the Princeton work, however, how do you respond to criticisms 
of their statistics, e.g.

http://www.inblogs.net/goodmath/2006/06/rebutting-pear-index.html

Have these (or similar) criticisms of their statistical methodology been 
addressed by the PEAR folks or supporters somewhere?

Basically, the criticism is: the PEAR team found some obscure 
statistical flukes in their data.  But, in any large dataset there will 
be **some** obscure statistical flukes.  The questions are:

a) did they find these flukes at time T, and then observe that the 
flukes persisted in data gathered after time T?

b) how thoroughly did they rule out the possibility that these flukes 
could be due to some obscure biases in the experimental equipment, the 
experimental setup, etc.?
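The underlying worry — that a large enough dataset, scanned for enough different statistics, will always yield some "significant" flukes — can be illustrated with a toy simulation (the function name and all parameters here are illustrative, not anything from PEAR's actual protocol):

```python
import random

def count_spurious_hits(n_tests=1000, n_flips=1000, z_crit=3.29, seed=0):
    """Simulate many independent fair-coin 'experiments' and count how
    many cross a fixed significance threshold purely by chance.
    z_crit = 3.29 corresponds to a two-tailed p of about 0.001, so we
    expect roughly n_tests * 0.001 false positives on average.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        # Normal approximation to the binomial: z = (heads - n/2) / sqrt(n/4)
        z = (heads - n_flips / 2) / (n_flips / 4) ** 0.5
        if abs(z) >= z_crit:
            hits += 1
    return hits
```

Even with no effect anywhere, a handful of these fair coins will look anomalous at p ~ 0.001 — which is why question (a), confirming a discovered fluke on data gathered afterward, matters so much.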

I am mistrustful of results that consist of obscure, minor statistical 
effects.  [Certainly this is qualitatively different from the reported 
CF effects in which large bursts of excess energy have been repeatedly 
reported by McKubre and others.]

Are there other published psi results that do not involve recognizing 
small patterns in obscure statistics in large datasets?  Or is this the 
nature of all the results that have been gathered?  [I'm not saying the 
results should be dismissed because they are of this nature, but I would 
of course prefer to look at results that display more pronounced 
patterns, obtained with obviously sound methodology.]
> One of the features I like about presentiment is that significant 
> responses *in advance of stimuli* can be and has been found in old 
> instrumented response data prepared by neuroscientists such as Damasio 
> for entirely different purposes (as one would expect if the phenomenon 
> is real).
>

Can you point me to a reference on the latter?

Ben



