[extropy-chat] calling all bayesians
Eliezer S. Yudkowsky
sentience at pobox.com
Thu May 12 18:55:52 UTC 2005
> Now then, here is the interesting part. A freem only requires
> 130 droobs, not 131. A failed droob was discovered in the 130,
> so it was replaced by one of the 20 spares. So that leaves 19
> spares and possibly some information regarding the reliability
> of a droob. If I assume the theoretical reliability of
> the droobs at one bad in 130, then the MC sim gives an answer
> for the probability that the remaining 130 are all good (~37%).
Um, that number is just 1/e, I think.
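A quick numeric check (a sketch, not part of the original thread): if each droob independently fails with probability 1/130, the chance that all 130 are good is (129/130)^130, which is very close to 1/e.

```python
import math

p_bad = 1.0 / 130          # assumed per-droob failure probability
n = 130                    # droobs required for a freem

p_all_good = (1 - p_bad) ** n
print(p_all_good)          # ~0.366, essentially 1/e
print(1 / math.e)          # ~0.368
```

This is the familiar limit (1 - 1/n)^n → 1/e as n grows, which is why the Monte Carlo simulation lands near 37%.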
> Testing 19 and finding all good tells me almost nothing, because
> that is the expected outcome (~86%). But without further info,
> I don't know that droobian reliability is one in 130 bad. I
> fall into a kind of circular reasoning.
> Does this conclusion agree with a Bayesian approach?
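The ~86% figure quoted above can be checked the same way (a sketch under the same assumed 1-in-130 failure rate): the probability that 19 independently tested droobs are all good is (129/130)^19.

```python
p_bad = 1.0 / 130           # assumed per-droob failure probability
p_19_good = (1 - p_bad) ** 19
print(p_19_good)            # ~0.86: "all 19 spares test good" is the expected outcome
```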
That bad droob was extremely relevant information, Spike; you should have told
me that at the outset. It means you know the failure rate isn't on the order of
zero. Also, it sounds like what you want is a mean-time-to-failure estimate,
unless the badness is a fixed property of the droob.
Having discovered one failed droob among the 130, whether testing the 19 spares
is interesting evidence about the remaining 129 depends on how much stress those
129 have been subjected to. If the other 129 have been placed in situations
that would cause them to definitely fail if they were bad, then testing 19
tells you nothing, of course. If the very first time you stressed any droob
sufficiently to make it fail, that droob failed, and none of the other droobs
have been tested, then you potentially have a very serious problem and you
should try testing some of the 19.
What Bayesian reasoning will do, in a case like this, is tell you how much a
given test result or observation favors hypothesis A over hypothesis B. You
have to provide hypothesis A, hypothesis B, and the prior likelihoods, I'm
afraid. So if you say, for example, that you think a failure rate of 1 in 130
is reasonable, and someone else says that a failure rate of 1 in 10 is also
reasonable, Bayes can tell you how much the success or failure of 19 droobs
would favor one hypothesis over the other. It doesn't say what prior
likelihood to assign to these hypotheses, although depending on other
specifications about mechanisms, it may tell you that the observed data
already significantly favors one hypothesis over the other.
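To make that concrete (a sketch using the two illustrative hypotheses from the paragraph above, with priors assumed equal for the example): if hypothesis A is a 1-in-130 failure rate and hypothesis B is a 1-in-10 failure rate, the observation "19 tested droobs, all good" yields a likelihood ratio of roughly 6.4 in favor of A.

```python
def likelihood_ratio(n_good, p_bad_a, p_bad_b):
    """Ratio of the probability of n_good all-good tests under
    failure rate p_bad_a versus failure rate p_bad_b."""
    return (1 - p_bad_a) ** n_good / (1 - p_bad_b) ** n_good

# The two hypothetical failure rates named in the text: 1 in 130 vs 1 in 10.
lr = likelihood_ratio(19, 1 / 130, 1 / 10)
print(lr)                   # ~6.4: the all-good result favors 1-in-130 about 6:1

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = 1.0            # assumed for illustration: both hypotheses equally credible
posterior_odds = prior_odds * lr
print(posterior_odds)
```

Note that the likelihood ratio is what the test result itself contributes; the prior odds are the part Bayes does not supply, which is exactly the point of the paragraph above.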
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence