[extropy-chat] calling all bayesians
Marc Geddes
marc_geddes at yahoo.co.nz
Fri May 13 05:55:20 UTC 2005
dgc:
>let's assume that the best estimator for p is the one
>with a 50% confidence:
>  p^19 = .5
>  p = 19th root of .5, or about .965. That is, about
>half the time we test a set of 19 droobs, all of them
>will pass if there is a 3.5% failure rate.
>
>Now, .965^130 = .0097 or thereabouts.
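Dan's arithmetic checks out in a couple of lines (a quick sketch; note the exact 19th root is closer to .9642, which is why carrying the rounded .965 forward overshoots slightly):

```python
# Dan's point estimate: the p at which seeing 19/19 good droobs is a coin flip.
p = 0.5 ** (1 / 19)        # 19th root of 0.5, i.e. roughly a 3.6% failure rate
print(round(p, 4))         # 0.9642
print(round(p ** 130, 4))  # 0.0087 (Dan's .0097 comes from rounding p to .965 first)
```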
That's how I reasoned too, Dan! It may not actually be
a bad rough estimate, but after thinking it over
overnight I realized that Eliezer is right again,
dammit. There's not enough information given in the
original problem for a proper mathematical analysis:
you need empirical data on the reliability of the
manufacturing technique.
Now, if the original problem is rephrased as a
question about the set of all possible such
situations, it does have a definite answer: given the
space of all possible such situations, in how many of
those where the 19 tested droobs were all good were
the other 131 also good? So I suggest that Spike run
simulations across all possible failure rates and
treat the simulations as a whole as the 'multiverse'.
Then count the simulations in which the 19 tested
droobs were all good, and ask what proportion of those
also had all 131 spares good. (But be sure to add up
the results from all possible failure-rate simulations
to get a SINGLE number - i.e. Spike must not treat the
different simulated failure rates as separate domains;
he needs to treat the whole set as a single entity to
simulate the 'multiverse'.)
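That multiverse simulation might be sketched like this (a toy Monte Carlo; I'm assuming a batch of 150 droobs - 19 tested plus 131 spares - and a failure rate drawn uniformly at random for each simulated 'universe', which is exactly the equal-probabilities assumption discussed below):

```python
import random

def simulate(n_universes=200_000, tested=19, rest=131):
    """Across many simulated 'universes', estimate
    P(all 131 spares good | all 19 tested droobs good)."""
    cond = 0  # universes where the 19 tested droobs all pass
    both = 0  # of those, universes where the other 131 also all pass
    for _ in range(n_universes):
        p = random.random()  # unknown per-universe success rate, uniform prior
        if all(random.random() < p for _ in range(tested)):
            cond += 1
            if all(random.random() < p for _ in range(rest)):
                both += 1
    return both / cond if cond else float('nan')

print(simulate())  # hovers around 20/151, i.e. roughly 0.13
```

The key design point is the single pooled ratio at the end: the failure rates are not treated as separate domains, just as the text insists.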
Of course, the figure obtained that way assumes equal
prior probabilities for all failure rates, and we
don't know whether that's a good assumption in the
case of manufacturing. So as I said, it's really an
empirical question.
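Under that equal-probabilities (uniform-prior) assumption the simulation's answer actually has a closed form, Laplace-style: P(all 131 spares good | 19 tested good) = ∫p^150 dp / ∫p^19 dp = 20/151. A two-line check:

```python
from fractions import Fraction

tested, rest = 19, 131
# Uniform prior on the success rate p gives, in closed form,
#   ∫ p^(tested+rest) dp / ∫ p^tested dp = (tested+1) / (tested+rest+1)
exact = Fraction(tested + 1, tested + rest + 1)
print(exact, float(exact))  # 20/151, about 0.1325
```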
But the context of the problem (manufacturing,
testing) suggests that a new manufacturing technique
is being tried out. For any new manufacturing
technique, you'd have to think there are far more ways
for things to go wrong than to go right, so it would
be reasonable to choose the lowest prior probability
of success consistent with the experimental results.
For instance, after testing 19 droobs in a row and
finding them all good, one could only logically
conclude that the probability a given droob is good is
likely somewhere between 95% and 100%; but given the
empirical context (manufacturing, testing), it would
seem sensible to opt for the lowest sensible figure
(95%).
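A quick way to see which success rates are consistent with 19/19 passes is to tabulate the likelihood of that observation (the rate values here are just illustrative choices):

```python
# Likelihood of observing 19 good droobs in a row under various
# per-droob success rates.
for p in (0.85, 0.90, 0.95, 0.965, 0.99):
    print(f"p={p:.3f}: P(19/19 pass) = {p ** 19:.3f}")
```

At p = 0.95 the observation still has a likelihood of about 0.38, so 0.95 sits comfortably at the conservative end of the rates the data allows.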
As to Mike: sorry, but the context was not 'sampling
things out of the box' (in that case we *expect* the
goods to work and should assign high prior
probabilities - once goods are packaged and sold we
expect them to work). The context was 'manufacturing
and testing'. In that context we would expect that
there are far more ways for a new manufacturing
technique to fail than to succeed, so we should assign
low prior probabilities.
I give up. How in the hell are we ever going to build
an FAI if we're stumped by silly things like droobs
and grues? Why are we so stupid dammit?
---
THE BRAIN is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.
-Emily Dickinson
'The brain is wider than the sky'
http://www.bartleby.com/113/1126.html
---
Please visit my web-site:
Mathematics, Mind and Matter
http://www.riemannai.org/
---