[ExI] Beating on the closed door of SCIENCE

Damien Broderick thespike at satx.rr.com
Mon Sep 8 19:54:22 UTC 2008


At 07:16 PM 9/8/2008 +0200, Sondre wrote:

>If the response from the user occurs ahead of the time at which the photo is
>displayed, you could potentially replace the randomly generated photo (at
>the split second it's about to be rendered) within the timeframe of the user
>response and record whether that actually has some effect, compared to a
>completely random display of photos.

So you're not replacing one unknowable random stimulus with another 
random stimulus but with a known, determinate stimulus? If you did 
this occasionally, I suspect it would just dilute the statistics, 
eventually down to noise. The point is that physiological state is 
always somewhat volatile; you're unlikely to look at a single trial 
and say "Whoa!"

I once had to wear a Holter monitor for 24 hours, which recorded 
cardiac variations; a complex analysis would later decide that 
the fluctuations in my heart signals were within the normal range. 
Much the same here. If the Holter mechanism had thrown in a bunch of 
extraneous noise, that might have masked the underlying effect, but 
wouldn't have proved that my heart was normal or wasn't beating (or whatever).
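To make the dilution point concrete, here is a toy simulation in Python. Every number in it (the size of the hypothetical anticipatory bump, the noise level, the trial count) is invented purely for illustration; all it shows is that the more trials you replace with a substituted photo carrying no anticipatory signal, the more the measured difference shrinks toward the noise floor.

import random, statistics

EFFECT = 0.3      # hypothetical anticipatory bump before "emotional" photos
NOISE = 1.0       # trial-to-trial physiological variability
TRIALS = 2000

def measured_effect(substitution_rate):
    emotional, calm = [], []
    for _ in range(TRIALS):
        substituted = random.random() < substitution_rate
        is_emotional = random.random() < 0.5
        # On substituted trials, assume any anticipatory signal no longer
        # tracks the photo actually shown, so no bump appears.
        bump = EFFECT if (is_emotional and not substituted) else 0.0
        (emotional if is_emotional else calm).append(random.gauss(bump, NOISE))
    return statistics.mean(emotional) - statistics.mean(calm)

for rate in (0.0, 0.25, 0.5, 0.9):
    print(f"substitution rate {rate:.0%}: measured effect ~ {measured_effect(rate):+.3f}")

With no substitution you recover roughly the built-in effect; at 90% substitution the measured difference is buried in the sampling noise.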

>If you then record the same stimuli
>with the wrong photo, then the test results would be invalidated? Wouldn't
>they?

A much more interesting investigation might consider "remote 
viewing," where a complex interpretative process takes place inside 
the mind of the "viewer" who attempts to respond to one of, say, four 
possible future images/locations that are as orthogonal to each other 
as feasible. My model of this process is that s/he goes into a state 
where images and affects swirl through the preconscious, and are 
sorted, discarded, or retained by whatever this strange process is, a 
bit like what happens when we gaze dreamily at patterns in clouds or 
on the ceiling. But suppose the allegedly random selection is biased 
deliberately (but double-blinded, obviously, so neither experimenter 
nor "viewer" knows at the time the weighting of the biases), so that 
Option 3 is liable to be chosen as target 90% rather than 25% of the 
time. In repeated runs of this test (using different options each 
time, of course), will viewers respond to the *more probable* target 
most of the time, or to the actual option that will really be chosen? 
IIRC, results show a heightened correlation with the actual target, 
not the more probable one.
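One way to see which statistic would settle that question is to split the scoring by whether the drawn target happened to be the secretly weighted option. Below is a toy scoring sketch in Python; the 90% weighting, the run count, and the two simulated "viewer" strategies (and their assumed accuracy) are all made up for illustration, not a description of how any actual study was analysed. A viewer who merely tracks the hidden weighting does very well when the weighted option is drawn but collapses on the minority trials, while a viewer who tracks the actual target scores about the same in both buckets.

import random

OPTIONS = ["A", "B", "C", "D"]       # four targets, as distinct as feasible
WEIGHTS = [1/30, 1/30, 0.9, 1/30]    # "Option 3" secretly weighted to 90%
FAVOURED = "C"
RUNS = 10_000

def probability_tracker(_actual):
    # A viewer who (somehow) tracks the hidden weighting, not the draw itself.
    return random.choices(OPTIONS, weights=WEIGHTS)[0]

def target_tracker(actual, accuracy=0.4):
    # A viewer who tracks the actual future target with some assumed accuracy.
    return actual if random.random() < accuracy else random.choice(OPTIONS)

for label, viewer in [("tracks the weighting", probability_tracker),
                      ("tracks the actual target", target_tracker)]:
    tallies = {"favoured target": [0, 0], "other target": [0, 0]}  # [hits, trials]
    for _ in range(RUNS):
        actual = random.choices(OPTIONS, weights=WEIGHTS)[0]
        bucket = "favoured target" if actual == FAVOURED else "other target"
        tallies[bucket][0] += viewer(actual) == actual
        tallies[bucket][1] += 1
    for bucket, (hits, n) in tallies.items():
        print(f"viewer {label}, {bucket} drawn: {hits}/{n} = {hits/n:.1%}")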

You can all stop rolling your eyes now.

Damien Broderick 



