[extropy-chat] A Bayesian Looks at Climate Change

Neil H. neuronexmachina at gmail.com
Fri Apr 21 01:44:06 UTC 2006


On 4/20/06, Edmund Schaefer <edmund.schaefer at gmail.com> wrote:
>
> On 4/19/06, Martin Striz <mstriz at gmail.com> wrote:
> <snip>
>
> > On 4/19/06, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
> > > No one in China has ever seen the Emperor of China, but everyone can
> > > guess his height to within plus or minus one meter.  Therefore, by
> > > polling a million Chinese and averaging their estimates, the law of
> > > large numbers says we can get an estimate of the Emperor's height that
> > > is accurate to within one millimeter.
>
> > But I think that's the point.  You sampled a billion people in exactly
> > the same way.  If you start with a number of studies, each with
> > different methodologies, then you hope to minimize the bias in each
> > one.
>
>
> By taking lots of studies and averaging the findings together, you're
> polling studies. In the Emperor of China analogy, each citizen's estimate is
> a (highly inaccurate) study trying to answer the question "How tall is the
> Emperor?" This is not to say that studies of global warming are as
> methodologically flawed as the average Chinese person's guess as to the
> height of the emperor, but that averaging bad studies together, which may
> have correlated biases (say, hypothetically, the Emperor is depicted as
> being very tall and muscular in propaganda, and thus people tend to
> overestimate his height), does not give you a more accurate answer to your
> question.
>
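
A quick sanity check on Edmund's point: with n independent guesses the
noise in the average shrinks like sigma/sqrt(n), but a bias shared by
every guess passes straight through, so the average converges to
(truth + bias) rather than the truth. Here's a toy simulation (all the
numbers are invented for illustration; plain Python):

import random

TRUE_HEIGHT = 1.78   # meters -- hypothetical "real" height of the Emperor
SHARED_BIAS = 0.15   # everyone overestimates, say because of propaganda
N = 10**6

# each guess = truth + common bias + independent noise (sigma = 0.5 m)
guesses = [TRUE_HEIGHT + SHARED_BIAS + random.gauss(0, 0.5) for _ in range(N)]
average = sum(guesses) / N

print("average of %d guesses: %.4f" % (N, average))             # ~1.93, not 1.78
print("residual error:        %.4f" % (average - TRUE_HEIGHT))  # ~= SHARED_BIAS

The independent noise really does average away to well under a millimeter,
just as the law of large numbers promises, but the answer is still off by
the full shared bias.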

I also recall another example: a famous physics or chemistry experiment
that measured some physical quantity. For several years afterward, the
values reported by other labs could be roughly modeled as a (slowly
moving?) Gaussian with its mean near the original measurement. This
continued until someone with more confidence in their apparatus finally
published a noticeably different figure, and the cycle repeated, with
subsequent experiments gravitating around the new value.

I think the idea was that if scientists got a value too different from
what had previously been reported, and they weren't confident in their
experiment, they tended to discard the value and rerun the experiment.
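
The dynamic is easy to caricature in code. This is just a toy model I'm
making up (arbitrary numbers, not the actual historical episode): each lab
measures with unbiased noise, but quietly re-runs the experiment whenever
the result lands too far from the last published figure.

import random

TRUE_VALUE = 10.0     # hypothetical true value of the quantity
FIRST_REPORT = 10.8   # the famous original measurement, off by 0.8
NOISE = 0.5           # per-experiment measurement noise (std dev)
TOLERANCE = 0.3       # results farther than this from consensus get redone

published = [FIRST_REPORT]
for lab in range(50):
    while True:
        result = random.gauss(TRUE_VALUE, NOISE)
        if abs(result - published[-1]) < TOLERANCE:
            break   # looks "believable", so it gets published
        # otherwise: blame the apparatus, discard the value, retry
    published.append(result)

print(" ".join("%.2f" % v for v in published))

Run it and the published values creep from 10.8 toward 10.0 over many
"labs" instead of jumping there immediately -- a slowly drifting cluster
around whatever was last reported, which is the pattern I remember.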

Does anyone else recall hearing about this? I've tried some googling for it,
to no avail.

-- Neil

