[extropy-chat] Bayes, crackpots and psi

Eliezer Yudkowsky sentience at pobox.com
Tue Dec 21 00:39:33 UTC 2004


Damien Broderick wrote:
> At 01:26 AM 12/20/2004 -0500, Eliezer wrote:
> 
>> I have to ask myself what kind of researcher goes to all the effort of 
>> getting a Ph.D. in parapsychology without being discouraged.
> 
> Perhaps the same kind who goes into AI without being discouraged.

Yes!  This is also a severe problem in AI!  I went into AI because I 
thought the fate of the world was at stake, and therefore the problem 
absolutely had to be solved as early as possible, even if it seemed 
impossibly difficult.  Even if the solution were 30 years off, I thought, I 
had to start immediately, for the sake of a hundred and fifty thousand 
souls annihilated each day.  And therefore *despite* my realization that AI 
was huge and scary and incredibly hard to solve, I stuck with the problem, 
kept learning and studying and thinking, long enough to realize that there 
were big powerful solutions to match the big powerful problems.  But why 
would *other* students, non-Singularitarians, still tackle the task of AGI 
after coming to that preliminary apprehension of the mountainous difficulty 
of the problem?  Maybe the field scares off most researchers who are not 
massively overconfident.  And they would take their tiny programs, and 
praise them to the stars, and to the media.  And AI would acquire a poor 
reputation for overhyped promises, and a habit of inflating small 
techniques out of all proportion.  I think, Damien, that a good part of
the pathology of AI academia is due to all the smart *non-overconfident*
people having been scared away by a very scary-looking problem.  Why
*would* anyone tackle a problem that huge, this early, if they lacked
the belief that the fate of the world was at stake?

>> Give me lottery numbers!
> 
> Give me some working AI code!

Hey, I'm not the one claiming to have already demonstrated statistically 
reliable reproducible precognition in the laboratory.  Why *isn't* it 
straightforward, given a couple of thousand subjects (Ss), to produce
winning lottery numbers?  According to the claims made by the
parapsychology researchers,
they should easily be able to predict the winning Mega Ball.  Having
demonstrated this on a small scale, or having bought ten thousand $2
winning tickets for a dollar apiece, they could easily scale up to
enough subjects to predict the entire lottery number.
Especially if they used an error-correcting code in the presentation of the 
coded Rhine cards to the Ss, who of course would not be told about the 
technological application of their precognition.

(You can buy Mega Millions tickets 15 minutes before the deadline.  Source: 
  http://www.megamillions.com/aboutus/lottery_faq.asp)
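To put rough numbers on that (all of them mine, not from any
parapsychology paper): suppose each S calls a binary forced choice at
a 51% hit rate, a far smaller edge than the ganzfeld meta-analyses
claim, and suppose we pool independent calls by simple majority vote
on each bit of the encoded Mega Ball.  A sketch in Python:

import math

# Hypothetical numbers for illustration only.  Chance on a binary
# forced choice is 50%; assume each call hits at 51%, and pool
# 2000 subjects making 25 calls apiece on each bit of the target.
P_HIT = 0.51
VOTES_PER_BIT = 2000 * 25
N_BITS = 6                        # six bits suffice to encode 1..52

# Normal approximation to the binomial: the majority vote on a bit
# is correct when the observed hit fraction exceeds one half.
sigma = math.sqrt(P_HIT * (1 - P_HIT) / VOTES_PER_BIT)
z = (P_HIT - 0.5) / sigma
p_bit = 0.5 * (1 + math.erf(z / math.sqrt(2)))

print("per-bit accuracy:      %.6f" % p_bit)              # ~0.999996
print("whole number correct:  %.6f" % (p_bit ** N_BITS))  # ~0.99998

Treating every call as independent is generous, which is rather the
point: if the per-call effect were real, even this crude repetition
code, never mind a proper error-correcting one, turns a 1% edge into
a near-certain prediction.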

But, ooh, somehow I can predict that the phenomenon will mysteriously 
vanish as soon as we apply it to anything worthwhile!  The thing about 
*real* statistically significant but small effects, Damien, is that they 
*don't* go away as soon as someone thinks up a good technological application.

>> Claim Randi's prize!
> 
> Eliezer, I thought you'd read Dennis Rawlins' excoriating essay on the 
> sTARBABY fiasco? Has this portion slipped your mind? (It's a comment 
> that echoes very many I've heard elsewhere.):

True.  I withdraw the above comment about Randi's Prize, and apologize.  I 
knew better, but I got caught up in the heat of the argument.

But, based on the claims made so far, with all the strength you attribute 
to their statistics, the parapsychologists should easily be able to win the 
lottery.  If nothing else, they should be able to easily double their money 
by predicting the Mega Ball in the Mega Millions lottery.  Real effects 
don't get smaller when you try to replicate them and then vanish entirely 
when you try to apply them in the real world.
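For concreteness, the break-even arithmetic, using the 2004 game format
as I understand it (every number here is an assumption to check against
the official rules: $1 per ticket, $2 for matching the Mega Ball alone,
Mega Ball drawn from 52 numbers):

TICKET = 1.00      # assumed ticket price
PRIZE = 2.00       # assumed prize for matching only the Mega Ball
P_CHANCE = 1 / 52  # assumed Mega Ball pool size in 2004

print("EV per ticket at chance:     $%.3f" % (PRIZE * P_CHANCE))
print("accuracy just to break even: %.0f%%" % (100 * TICKET / PRIZE))
print("accuracy at chance:          %.1f%%" % (100 * P_CHANCE))

So "double their money" means calling the Mega Ball nearly every time,
a 26-fold improvement over chance just to break even, which is exactly
what the pooling arithmetic above is supposed to buy.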

> Cambridge Physics Nobelist Brian Josephson recently complained about 
> Randi's use of PR rather than scientific criteria for `failing' a 
> paranormal claimant. I see his point, but have to admit that Randi was 
> justified, since both claimants and Randi always agree in advance of the 
> test to certain canons of success or failure. The claimant achieved her 
> (bizarre) task to the muted tune of p < 0.02, but did not do as well as 
> she'd said she would. Still, Josephson's complaint might be worth a 
> glance (he's also an informed cold fusion fan):

p < 0.02?  I bet Randi has been through this more than 50 times.  He
was foolish to permit 5 hits out of 7 as confirmation - that's merely
p < 0.005.  If he keeps that up, he's going to end up with egg on his
face after 200 tries.  I'm glad I don't have money riding on the Prize.
Maybe Randi has (unforgivably) resorted to shenanigans to avoid paying,
but *if so*, it's no wonder to me, because he's been setting
unsustainably lax standards of proof.  Two wrongs don't make a right,
obviously - it's just more reason to be wary of the Randi Prize as
purported evidence.
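The arithmetic behind that, hedged: Randi's protocols vary and the
essay doesn't give this one, so I've assumed a 1-in-5 chance rate per
trial, chosen only because it reproduces the quoted p < 0.005:

from math import comb

P_CHANCE = 0.2           # assumed per-trial chance rate (my guess)
N_TRIALS, PASSES = 7, 5  # pass criterion: 5 or more hits in 7 trials

# Probability of passing by luck alone: the upper binomial tail.
p_fluke = sum(comb(N_TRIALS, k)
              * P_CHANCE ** k * (1 - P_CHANCE) ** (N_TRIALS - k)
              for k in range(PASSES, N_TRIALS + 1))

print("fluke pass per claimant:    %.4f" % p_fluke)         # ~0.0047
print("expected flukes, 200 tests: %.2f" % (200 * p_fluke)) # ~0.93

At that standard roughly one claimant in two hundred passes by pure
luck, which is why a prize meant to survive hundreds of challengers
needs a far stricter criterion.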

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


