[extropy-chat] Failure of low-fat diet

Hal Finney hal at finney.org
Wed Feb 22 19:14:17 UTC 2006


> Keith Henson writes:
> At 06:47 PM 2/21/2006 -0500, Robin Hanson wrote:
> > Every study has flaws one can point out.   The point is that people are
> > quick to use the flaws to dismiss studies whose conclusions they don't
> > like, and they hardly notice the flaws of the studies whose conclusions
> > they do like.  That habit allows one to pretty much ignore the evidence
> > in favor of preconceived expectations.
>
> Just like:
>
> "None of the circuits involved in conscious reasoning were particularly 
> engaged," Westen said. "Essentially, it appears as if partisans twirl the 
> cognitive kaleidoscope until they get the conclusions they want, and then 
> they get massively reinforced for it, with the elimination of negative 
> emotional states and activation of positive ones."
>
> "Notably absent were any increases in activation of the dorsolateral 
> prefrontal cortex, the part of the brain most associated with reasoning."

That was a good study Keith is quoting; the article is here:
http://www.rxpgnews.com/specialtopics/article_3287.shtml
The actual study is not public yet, but should be available from here
eventually:
http://www.psychsystems.net/lab/type4.cfm?id=400&section=4&source=200&source2=1

It reminds me of research I read about last year.  In that study,
subjects were shown data reports that either confirmed or contradicted
their ideological/political beliefs.  The researchers were trying to
learn about the mechanisms by which ideologies defend themselves.
One hypothesis was that people would simply ignore contradictory
information and focus on confirmatory data.  The results were just
the opposite.

Instead, people tended to take in the confirmatory data reports
uncritically, with a "yes, of course" attitude.  The data that
contradicted their beliefs, however, were subjected to intense scrutiny:
every detail was examined for loopholes, caveats, exceptions and other
flaws.
Since every study is imperfect, subjects were able to find and focus on
these problems, and discredit the report in their own minds, allowing
them to cling to their pre-existing beliefs.  The result was that they
spent much more attention and mental energy on the contradictory reports
than the confirmatory ones.  (And indeed, in debates here and elsewhere,
note how much more time people spend attacking the other person's position
than explaining the validity of their own.)
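
To make that mechanism concrete, here is a toy sketch (my own
illustration, in Python, with made-up numbers rather than anything
from the study): a Bayesian updater who accepts confirming reports at
face value but discounts the evidential weight of contradicting ones,
as if flaws had been found in them.

    # Toy model of asymmetric scrutiny.  Every study carries the same
    # likelihood ratio LR for its own conclusion.  "Scrutiny" shrinks a
    # contradicting study's effective LR toward 1 (no evidence at all).

    def posterior(prior, n_pro, n_con, lr=2.0, discount=0.25):
        """Belief after n_pro confirming and n_con contradicting studies.
        discount=1.0 is a fair reader; discount<1.0 scrutinizes away most
        of the force of each contradicting study."""
        odds = prior / (1.0 - prior)
        odds *= lr ** n_pro                             # taken at face value
        odds /= (1.0 + (lr - 1.0) * discount) ** n_con  # largely explained away
        return odds / (1.0 + odds)

    print(posterior(0.7, 10, 10, discount=1.0))   # 0.7: fair reader,
                                                  # balanced evidence,
                                                  # belief unchanged
    print(posterior(0.7, 10, 10, discount=0.25))  # ~0.996: same evidence,
                                                  # biased reader ends up
                                                  # nearly certain

On perfectly balanced evidence the biased reader does not merely hold
his ground but becomes more confident, which is one way the extra
scrutiny could pay off subjectively.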

The report Keith is citing was a little simpler, in that people
were apparently exposed to relatively short data sets which showed
contradictory or hypocritical behavior on the part of politicians
they favored or opposed.  This would probably give them less to go on
in terms of looking for explanations or excuses when their favorite
candidate was shown in a bad light.  Otherwise I would have predicted
intense rational thought as people examined those reports searching
for flaws.  Without enough data to resolve the
contradiction, I would hypothesize that people would mostly feel intense
frustration, convinced that there must be some reason that would explain
it away, and unhappy that the researchers were withholding exculpatory
data and instead intentionally showing their favorite candidate in
an unjustifiably bad way.  This could explain some of the emotional
reactions the researchers discovered.

These MRI studies are a great new window into how people think.  I'm sure
we will see much more research along these lines, as the machines are
becoming widely available.  Repeating the earlier study I mentioned
in an MRI scanner would be an interesting test.

Keith goes on:
> Now to raise this to a meta level, why do people get stuck on preconceived 
> expectations?

That's a much harder problem.  Just to illustrate the difficulty,
I think the first thing you need to decide in looking for an explanation
is whether this behavior is a good idea or not.  The class of
explanations you explore will depend fundamentally on whether you see
this as a reasonable heuristic for a rational observer with potentially
stringent bounds on his available computational capacity, or as behavior
that is irrational in terms of getting at the truth but justified by
other benefits, such as social advantages.

I couldn't even answer this basic question, whether this behavior makes
sense in terms of getting at the truth of things.  I can see arguments
either way on that one.  So I won't even stick a toe in the water of
possible explanations.

Hal


