[ExI] Medical "research"?
Anders Sandberg
anders at aleph.se
Sat Apr 7 07:43:27 UTC 2012
On 2012-04-06 23:51, Kelly Anderson wrote:
> On Fri, Apr 6, 2012 at 8:04 AM, BillK <pharos at gmail.com> wrote:
>> In cancer science, many "discoveries" don't hold up
>> By Sharon Begley, Reuters
>> Mar. 28, 2012 11:09AM PDT
>>
>> NEW YORK (Reuters) - A former researcher at Amgen Inc has found that
>> many basic studies on cancer -- a high proportion of them from
>> university labs -- are unreliable, with grim consequences for
>> producing new medicines in the future.
Bismarck's quote about sausages and politics (you are better off not
knowing how they are made) is of course true for a lot of science too.
An annoyingly large amount is done shoddily, or suffers from inherent
uncertainty that makes individual studies very weak guides towards the
truth.
> It doesn't surprise me in the least that scientific results are hard
> to duplicate. Yet the scientific method requires that results be
> duplicated. Nobody, of course, will publish duplicated work unless
> they do a hell of a lot of it, like these companies did.
Yup. The current reward structure in academia is known to be broken. It
produces a positive publication bias (findings that something works get
published in preference to negative findings), discourages replication,
favors writing lots of papers over writing good papers, does not
clearly retract or flag erroneous results, calls for interdisciplinary
work yet does not fund it, lets funders bias research, gripe gripe
gripe...
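To get a feel for how badly that publication filter distorts a field,
here is a toy simulation in Python (all numbers are invented: the true
effect is zero, and only "significant" positive results get published):

import random
import statistics

random.seed(1)

N_STUDIES = 1000   # studies run in the field
N_SUBJECTS = 30    # sample size per study
TRUE_EFFECT = 0.0  # the effect under study does not actually exist

published = []
for _ in range(N_STUDIES):
    # each study estimates the effect from a noisy sample
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_SUBJECTS)]
    est = statistics.mean(sample)
    se = statistics.stdev(sample) / N_SUBJECTS ** 0.5
    if est / se > 2:   # crude filter: only "significant" positives
        published.append(est)

print("studies run:          ", N_STUDIES)
print("studies published:    ", len(published))
print("mean published effect: %.2f" % statistics.mean(published))

Every published estimate clears the significance bar by construction,
so the "literature" reports a respectable positive effect where none
exists - and honest replications will then look like failures.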
Case in point: the multiple failed replications of Bem's ESP claims had
a very hard time getting published, despite the high profile of the
original claim. (And of course, even after they got published the
mainstream media did not care to mention them - there is a whole other
slew of biases in how science is presented to the public and
policymakers. See
http://www.overcomingbias.com/2007/08/media-risk-bias.html )
Note that not all attempts at reducing bias or uncertainty work well
either. In many domains, having more studies just produces extra
confusion (since they produce fairly random results): what is needed
are very big interventional studies, something that is rarely done
since it 1) costs a lot, 2) might have ethical problems, and 3)
requires competing research groups to join forces or be squeezed out of
funding by the leading coalition. As I calculated at the end of this
post
http://www.overcomingbias.com/2007/01/supping_with_th.html
removing biasing funding (say, from tobacco companies) might also
reduce progress, since there would then be fewer (if less biased)
studies.
Inventing better ways of doing science should be a high priority.
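As a toy version of the "more studies just adds confusion" problem:
suppose every small study in a field shares some bias (a common
methodological confounder, say) on top of its own quirks and sampling
noise. Averaging more studies shrinks the noise and the quirks but
never the shared bias, so the pooled estimate hits an error floor that
only a big, differently designed interventional study gets under. All
parameters below are invented for illustration:

import random
import statistics

random.seed(2)
TRUE = 0.3        # true effect size
SHARED_SD = 0.2   # field-wide bias shared by all small studies
STUDY_SD = 0.4    # per-study bias (design quirks, populations, labs)
NOISE_SD = 0.5    # sampling noise of one small study

def pooled_rms_error(n_studies, trials=500):
    """RMS error of the mean of n small studies, over simulated fields."""
    sq_errs = []
    for _ in range(trials):
        shared = random.gauss(0, SHARED_SD)  # averaging cannot remove this
        estimates = [TRUE + shared + random.gauss(0, STUDY_SD)
                     + random.gauss(0, NOISE_SD) for _ in range(n_studies)]
        sq_errs.append((statistics.mean(estimates) - TRUE) ** 2)
    return statistics.mean(sq_errs) ** 0.5

for n in (1, 10, 100, 1000):
    print("%4d small studies: RMS error %.3f" % (n, pooled_rms_error(n)))

# one big interventional study: no shared bias, and the noise shrinks
# with sample size, e.g. 100x the sample of a single small study
print("one big study:      RMS error %.3f" % (NOISE_SD / 100 ** 0.5))

With these numbers the pooled error stalls near the shared bias (0.2)
no matter how many small studies pile up, while the single big study
gets down to 0.05.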
> Bart Kosko:
> "Scientists have in large part treated fuzzy theory and fuzzy
> theorists badly. Some of us asked for it. All of us got it. In the end
> that process strengthens fuzzy theory and fuzzy theorists. Adversity,
> like muscle stress, works that way."
Of course, a lot of that has to do with the fact that fuzzy logic is
not a panacea for handling uncertainty, while its adherents claimed it
was. It is not obvious that it offers any benefit over a Bayesian
treatment, and the latter has the advantage of being more
mathematically rigorous.
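For comparison, the Bayesian treatment is just probability theory
applied consistently: represent uncertainty as a distribution and let
Bayes' rule update it. A minimal sketch (the standard Beta-Bernoulli
conjugate update, with made-up observation counts):

# Uncertainty about an unknown rate p is a Beta(a, b) distribution,
# and Bayes' rule updates it exactly as evidence arrives.

def update(a, b, successes, failures):
    """Conjugate update of a Beta(a, b) prior on a rate."""
    return a + successes, b + failures

def mean(a, b):
    """Posterior mean of Beta(a, b)."""
    return a / (a + b)

a, b = 1.0, 1.0              # uniform prior: total ignorance
print("prior mean:   %.3f" % mean(a, b))

a, b = update(a, b, 7, 3)    # observe 7 successes, 3 failures
print("after 7/10:   %.3f" % mean(a, b))

a, b = update(a, b, 70, 30)  # more evidence tightens the estimate
print("after 77/110: %.3f" % mean(a, b))

Nothing here is exotic; the point is that the update rule is forced by
the probability axioms rather than chosen ad hoc.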
--
Anders Sandberg
Future of Humanity Institute
Oxford University