[extropy-chat] Re: Belated remarks on the usefulness of medicine
rafal.smigrodzki at gmail.com
Mon Nov 28 21:35:44 UTC 2005
Again I answer belatedly, for which I am sorry. If you don't mind, I am
sending this response to the ExI list. Posts extolling the virtues of North
Korea and government mind control distract me too much, so I unsubscribed.
On 10/15/05, Robin Hanson <rhanson at gmu.edu> wrote:
> You are a doctor, so you must choose how to invest your riches. And
> that choice implies an opinion about the quality of advisor studies.
> Do you invest in an actively managed investment fund, or in an index
> (passive) fund?
### I don't have riches (I work for a biotech company with no money), but
what little I have is in the following investments: two stocks I chose based
on personal knowledge of facts that are not public (one is very successful,
and I still have hope for the other), a mid-cap index fund, another index
fund, a commodity fund (bought by my wife) which I intend to convert to an
index fund in January, and about $200 in two managed funds which have
consistently beaten the relevant benchmarks for the last 15 years or so. I
can also see that the 15-year yields of most managed funds hardly differ
from those of index funds, and I am superficially familiar with efficient
markets theory and with the work of Fischer Black (second-hand, of course),
so I have the theoretical background and the simple observations sufficient
to reject the exaggerated claims of some advisers.
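As a rough illustration of why the 15-year yields of managed funds hardly differ from those of index funds, here is a toy simulation (the market return, fee levels, and manager skill spread are all assumptions for illustration, not data): if gross manager alpha averages to zero, the higher fee is pure drag on compounding.

```python
import random

random.seed(0)

YEARS = 15
MARKET_RETURN = 0.08      # assumed average annual market return
INDEX_FEE = 0.002         # assumed index fund expense ratio
MANAGED_FEE = 0.015       # assumed managed fund expense ratio
ALPHA_SD = 0.02           # assumed spread of manager skill/luck around the market

def final_value(fee, alpha_sd, trials=10000):
    """Average growth of $1 over YEARS after fees, across simulated funds."""
    total = 0.0
    for _ in range(trials):
        value = 1.0
        for _ in range(YEARS):
            alpha = random.gauss(0.0, alpha_sd)  # gross alpha averages zero
            value *= 1.0 + MARKET_RETURN + alpha - fee
        total += value
    return total / trials

print("index fund, $1 after 15y: %.2f" % final_value(INDEX_FEE, 0.0))
print("managed fund, $1 after 15y: %.2f" % final_value(MANAGED_FEE, ALPHA_SD))
```

Under these assumed numbers the managed fund's extra 1.3% annual fee eats its entire expected edge, which is the efficient-markets observation in miniature.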
Now, back to our healthcare discussion:
> >A comment about my "giving the benefit of the doubt to docs" - I
> >don't give the benefit of the doubt to docs. If there are
> >complicating factors that might make me doubt a specific claim, I
> >doubt it. If there are no complicating factors, I trust the specific claim.
> For *every* clinical study there is the possibility of these
> complicating factors: fraud, missing regression factors, regression
> selection biases, publication selection biases, side-effect induced
> placebo effects, and differences between trial and typical treatment
> practices. For every study you must make an estimate of the sign and
> magnitude of these factors in order to use the study to estimate how
> your patients may fare under the studied treatment. (In addition
> there is the crucial issue of the average effect of treatments for
> which there are no studies.)
### Fraud could be used to explain anything but do you think it is
sufficient to explain away e.g. the reports of the effectiveness of kidney
dialysis? Regression and selection biases do not affect RCTs, which should
also take care of missing regression factors. If the side-effect induced
placebo effect was a major force, then similar medications with more
side-effects would be found consistently more effective than the ones
without such side-effects - this is not the case in the comparisons of e.g.
COX-2 inhibitors with non-specific COX inhibitors, or tricyclics with SRIs,
and many others.
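The point about randomization can be made concrete with a toy simulation (the effect size, the confounder, and the assignment rule are invented for illustration): when sicker patients are more likely to receive the drug, a naive observational comparison distorts the estimate, while coin-flip assignment recovers the true effect because severity ends up balanced across arms.

```python
import random

random.seed(1)
N = 50000
TRUE_EFFECT = -5.0   # assumed: treatment lowers blood pressure by 5 units

def outcome(severity, treated):
    # outcome worsens with severity; treatment has a fixed true effect
    return 100.0 + 10.0 * severity + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 2)

def estimate(assign):
    """Naive treated-minus-control difference under a given assignment rule."""
    treated, control = [], []
    for _ in range(N):
        severity = random.random()          # hidden confounder
        t = assign(severity)
        (treated if t else control).append(outcome(severity, t))
    return sum(treated) / len(treated) - sum(control) / len(control)

# Observational: the sicker the patient, the likelier the drug is given.
obs = estimate(lambda s: random.random() < s)
# RCT: a coin flip ignores severity, so the groups are balanced on it.
rct = estimate(lambda s: random.random() < 0.5)

print("true effect: %.1f" % TRUE_EFFECT)
print("observational estimate: %.1f" % obs)  # distorted by confounding by indication
print("RCT estimate: %.1f" % rct)
```

In this sketch confounding by indication makes the drug look much less effective than it is; randomization needs no list of the confounders to neutralize them, which is why missing regression factors do not bias an RCT.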
Publication bias may account for failure to publish studies showing lack of
effectiveness of existing treatments but it is quite unlikely to hide
deleterious effects of accepted treatments. There are large rewards for
publishing studies pointing out dangerous side effects.
The only form of confounding factor you mentioned that could have a large
effect on average efficacy of medicine is the difference between trial and
typical practice, although again, the difference could mostly lead to lack
of effect rather than deleterious effects, at least in non-interventional
specialties. Most physicians tend to do less than necessary to treat,
although they frequently overdo diagnostics.
To summarize, confounding factors are mostly taken care of by correct trial
design, and multiple independent trials, sufficient to arrive at conclusions
which I on average tend to trust. The totality of these predominantly
interventional trials is at variance with the observational studies of
aggregate efficacy of healthcare spending on which you are building your
position. Since the observational trials are subject to significantly larger
uncertainties, and there are few of them, I have no difficulty choosing the
group to believe.
> It seems that unless you have specific evidence suggesting such a
> problem is present, you assume the study has no such problem. That
> is what I meant by giving them the benefit of the doubt.
### Since I dismiss biases as not anywhere close to being a reason to doubt
medical efficacy studies in general and RCTs in particular (which is
different from studies of financial advisers, as analyzed above), indeed I
do not doubt them unless I have specific reason to distrust them.
Now, you seem to be in the opposite situation - you believe you do have an a
priori reason (i.e. the RAND study) to disbelieve medical studies - but can
you also give some specific examples of studies you distrust? What about
e.g. the ALLHAT and DATATOP trials? What do you think is wrong with them?
Let me note that you seem to be giving a pretty big benefit of the doubt to
the RAND study, and this despite it being rife with deficiencies visible
even to my poorly trained eye. Are you giving such benefit of the doubt to
every observational study of healthcare effects? Do you think that
statisticians who vouch for the validity of individual, interventional
clinical efficacy studies are all inept or fraudulent, while the ones who do
aggregate observational studies are somehow closer to the truth?
> >Now, let me ask you a question: What procedure did you use to arrive
> >at your present relative weighting of contradictory evidence
> >regarding medicine, evidence where hundreds of thousands of largely
> >concordant studies (animal, human, observational, interventional)
> >united by a common theoretical background (life sciences,
> >statistics) are contrasted with a few dozen observational (and one
> >flawed interventional) studies with multiple confounding factors?
> All the clinical studies have multiple confounding factors too. I
> don't see the relevance of a common theoretical background, and the
> number of studies is far less important than the likely biases in
> those studies.
### I disagree here. The degree of confidence you can have in a clinical
study is dependent on the degree of consistency with a large body of
experimental data from life sciences. If a drug is shown to reduce blood
pressure in animals, and is safe in animals, it adds to the reliability of
the clinical study. If animal models of hypertension show increased
mortality which is controlled by the drug, then reports of decreased
mortality in hypertensive humans are easier to accept as well.
The number of studies is quite important as well, since in most cases
additional studies are applying interventions in varied circumstances and
with modifications, in addition to increasing the raw numbers of observed
patients. If you have a dozen ACE inhibitors, all of which are reported by
various groups to lower blood pressure and reduce the risk of stroke, using
various paradigms (observational, interventional), both industry and
public-funded, then to reject the conclusion that ACE inhibitors prevent
stroke, you need to postulate a bias acting uniformly in many different settings.
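This can be put in rough numbers with inverse-variance (fixed-effect) pooling; the trial estimates below are made up for illustration, not taken from the ACE inhibitor literature:

```python
import math

# Hypothetical effect estimates (log relative risk of stroke) and standard
# errors from a dozen independent trials -- invented numbers for illustration.
trials = [(-0.25, 0.10), (-0.18, 0.12), (-0.30, 0.15), (-0.22, 0.08),
          (-0.15, 0.11), (-0.28, 0.09), (-0.20, 0.14), (-0.24, 0.10),
          (-0.19, 0.13), (-0.26, 0.12), (-0.21, 0.10), (-0.23, 0.11)]

# Fixed-effect pooling: each trial is weighted by the inverse of its variance.
weights = [1.0 / se ** 2 for _, se in trials]
pooled = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print("pooled log-RR: %.3f (se %.3f)" % (pooled, pooled_se))
print("z-score: %.1f" % (pooled / pooled_se))
```

With a dozen concordant trials the pooled standard error shrinks to a fraction of any single trial's, so to erase the conclusion a bias would have to shift most of the trials in the same direction by roughly the whole pooled effect - which is exactly the uniform, multi-setting bias one would have to postulate.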
And this brings me to the word "likely" that you used in reference to
biases. You infer the existence of a general bias to report favorable
results and to suppress unfavorable ones, affecting hundreds of thousands of
scientists working in various settings, and you confidently predict (by
using the word "likely") that the bias is large enough to hide deleterious
medical treatments whose harm outweighs the combined effects of all
interventions shown to work.
Here is a short and non-exhaustive list of beneficial medical interventions:
Opiates for pain
Catheterization of an obstructed bladder
Disimpaction of an impacted bowel
Insulin for type I diabetes and for some forms of diabetic coma
Thiamine for Wernicke-Korsakoff syndrome
Vitamin B12 for subacute combined degeneration
Vitamin C for scurvy
Vaccination for smallpox, tetanus, etc.
Thyroid hormones for hypothyroidism
Thyroid ablation for hyperthyroidism
Chemotherapy for seminoma
Antibiotics for gastric ulcers
Dopaminergics for Parkinson's disease
Antiseizure medications for epilepsy
Surgery for spinal stenosis
Triptans for migraine
The list could go on.
Now, you may be convinced that all the allegations of benefits of the above
interventions are due to fraud or ineptitude of the statisticians who vetted
the study designs, but you need to point out specific problems with the
majority of the above examples before you can say you made your case.
Alternatively, you have to provide examples of deleterious medical
interventions that cancel the benefits of the above examples.
Can you do either?
> For clinical studies the probable sign of most complicating factors
> is to make treatments look more beneficial than they actually
> are. After all, most studies are funded or run by people who are
> trying to make a treatment look good. So fraud, selection effects,
> and treatment differences are likely to overestimate benefits. What
> is less clear is the magnitude of those effects.
> The observational and experimental studies on the aggregate health
> effects of medicine are also mostly funded and run by people who want
> medicine to look effective. They are mostly embarrassed and
> disappointed by their findings of no effect. So the likely sign of
> bias is in the same direction, suggesting medicine has even lower
> benefits than they find. Observational studies may indeed be missing
> important controlling factors, but I don't see a reason to expect any
> particular sign for this bias. And for these studies one needs no
> assumption about the average effect of treatments for which there are
> no studies - all treatments are included in the data.
### You still didn't tell me how you weigh the evidence: on one side
hundreds of thousands of independent yet mutually supportive results,
obtained under conditions where confounding factors can be largely excluded
(RCTs, lab studies), on the other side a few dozen observational studies of
aggregate outcomes of spending, where confounding factors are myriad. I feel
that your above divagations on the likely wishes and disappointments of the
respective groups of scientists are somewhat wanting, especially in the
quantitative sense. Essentially you are basing your case on psychology and
non-quantitative allegations of bias.
Give me some numbers - for every proven intervention give a disproof, or a
counterbalancing harmful intervention. Otherwise you remain unconvincing.