[Paleopsych] Slate: Risky Business - A book tries, and fails, to quantify catastrophic risks
Premise Checker
checker at panix.com
Wed Jan 19 15:16:46 UTC 2005
Risky Business - A book tries, and fails, to quantify catastrophic risks.
By Jeffrey Rosen
http://www.slate.com/id/2109600
Posted Monday, Nov. 22, 2004, at 6:16 AM PT
Richard Posner's Catastrophe: Risk and Response was inspired, he says,
by Margaret Atwood's 2003 novel Oryx and Crake, set in the near
future, which imagines "the extinction of the human race in a world
menaced by bioterrorism and uncontrolled technological advance." He
hastens to assure his readers that he hasn't become "an apocalyptic
visionary," but a peculiar unreality suffuses his book, which proves
unexpectedly blind to the real threats we face.
By "catastrophic risk," Posner means risks that have a low probability
of materializing but are likely to create nearly unimaginable harms if
they do. He believes they are real and growing. His examples include
falling asteroids that could wipe out a quarter of the earth's
population within 24 hours; global warming, which could cause floods
("Harvard gone the way of Atlantis") followed by "snowball earth";
nanotechnology that could envelop the earth in "gray goo";
bioterrorism; and "superintelligent robots" that "may kill us, put us
in zoos, or enslave us, using mind-control technologies to extinguish
any possibility of revolt, as in the movie The Matrix."
Posner's thesis when discussing these emotionally laden subjects is as
deadpan as his prose: "[T]he tools of economic analysis--in particular
cost-benefit analysis--are indispensable to evaluating the possible
responses to the catastrophic risks." Unfortunately, assigning precise
numerical weights to the costs and benefits of preventing catastrophic
risks is a daunting challenge, and Posner's attempts to do so are
numbingly technical and ultimately unsatisfying. In the end, balancing
liberty and security involves disputed questions of value rather than
precisely quantifiable facts--questions that must be resolved not by
experts but by politics.
Consider the possibility that atomic particles, colliding in a
powerful accelerator such as Brookhaven Lab's Relativistic Heavy Ion
Collider, could reassemble themselves into a compressed object called
a strangelet that would destroy the world. Posner sets out to
"monetize" the costs and benefits of this "extremely unlikely"
disaster. He estimates "the cost of extinction of the human race" at
$600 trillion and the annual probability of such a disaster at 1 in 10
million. These figures are "arbitrary," he acknowledges, because it is
impossible to calculate the real probability of a strangelet disaster,
and scientists who have attempted to do so have been attacked for
politicizing the question. His attempts to calculate the benefits of
developing the RHIC are also arbitrary because there's no impartial
way to calculate the value the public assigns to research in particle
physics. After elaborate calculations based on arbitrary figures, he
suggests that perhaps the costs and benefits can't be precisely
monetized: Congress, or the public, could be told instead that "there
is one chance in 10 million of a world destroying accelerator accident
that could be avoided by closing down RHIC at a cost in benefits
forgone estimated at $2.1." What, then, is the point of the elaborate
calculations?
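To see what these numbers imply, here is a minimal sketch of the
expected-cost arithmetic Posner's method rests on. The $600 trillion
harm and the 1-in-10-million annual probability are his figures; the
code itself is illustrative, not from the book:

    # Expected annual cost of a strangelet disaster, using Posner's inputs.
    # Expected cost = (annual probability of disaster) x (harm if it occurs).
    cost_of_extinction = 600e12          # Posner's $600 trillion estimate
    annual_probability = 1 / 10_000_000  # his 1-in-10-million annual figure
    expected_annual_cost = annual_probability * cost_of_extinction
    print(f"Expected annual cost: ${expected_annual_cost:,.0f}")
    # -> Expected annual cost: $60,000,000

On these inputs, the expected annual cost of running the collider
comes to about $60 million, the kind of number Posner would have
decision-makers weigh against the forgone research benefits. The
precision is spurious, of course, since both inputs are, by his own
admission, arbitrary.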
Posner argues that even arbitrary figures could promote reasoned
decision-making that might close down some of the most dangerous
research in RHIC. But if the decision is made by democratic bodies
accountable to the public, this may be too optimistic. Behavioral
psychologists have found that the public tends to make judgments about
risk based on emotional feelings about whether something is good or
bad, safe or dangerous, rather than on a dispassionate calculation of
costs and benefits. Paul Slovic of the University of Oregon, for
example, has argued that risk is a subjective concept that has
different meanings to different citizens--some focus more on the low
probability of particular threats, others on the potential severity,
still others on the possibility that children might be harmed.
These differences in risk perception can only be resolved through
political negotiation. But democratic politics is an enterprise for
which Posner has contempt. He is addicted to the rule of experts, and
he proposes a series of arid and (for a self-styled pragmatist)
surprisingly impractical policy solutions for applying cost-benefit
analysis to risk calculation: a "science court" of experts that would
review dangerous government research projects; the creation of an
international environmental protection agency to enforce a modified
Kyoto Protocol under the auspices of the United Nations; a federal
review board that would forbid any scientific research that poses an
"undue risk" to human survival. Few of these proposals have any
realistic chance of being adopted in America. And even if they were
adopted, public emotionalism would continue to demand irrational (or
as the behavioral psychologists say, "quasi-rational") allocations of
resources that would thwart the experts' recommendations. Although
Posner promises to monetize the costs of these psychological and
political impediments, he fails to do so.
Even if Posner's proposals could be imposed by judicial fiat, which
they can't and shouldn't, they seem underwhelming on their own terms.
In a surprising hole at the end of the book, Posner declines to offer
practical examples of how cost-benefit analysis could cast precise
light on the very real terrorist threats that menace us. Consider the
possibility of biological terrorism. Posner argues plausibly that the
government should balance the costs of abridging civil liberties
against the benefits of preventing terrorist catastrophes. He
correctly criticizes some civil libertarians for failing to calculate
these costs and benefits. But he then proves unable to calculate them
himself--and dismisses those, including me, who have argued that the
public tends to overestimate the likelihood that they will be
personally harmed by especially frightening forms of terrorism that
are easy to visualize.
After 9/11, in fact, respondents in a poll perceived a 20 percent
chance that they would be personally hurt in a terrorist attack within
the next year. These predictions could have come true only if an
attack of similar magnitude to 9/11 occurred nearly every day for the
following year. Although the actual probability of terrorist attacks
is impossible to measure, nothing in al-Qaida's history suggests
anything like the capacity to produce 9/11-scale attacks on a daily
basis.
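A rough sanity check makes the gap vivid. Assume a U.S. population of
about 290 million and roughly 10,000 casualties per 9/11-scale attack;
both figures are illustrative assumptions, and only the 20 percent
poll result comes from the survey itself:

    # Sanity check on the perceived 20 percent personal risk.
    # Population and casualty figures are illustrative assumptions.
    population = 290e6             # approximate U.S. population, 2004
    casualties_per_attack = 10e3   # ~3,000 deaths plus injuries, 9/11 scale
    attacks_per_year = 365         # one 9/11-scale attack every day
    implied_risk = attacks_per_year * casualties_per_attack / population
    print(f"Implied annual personal risk: {implied_risk:.1%}")
    # -> Implied annual personal risk: 1.3%

Even a full year of daily 9/11-scale attacks would directly harm only
about 1 percent of the population, nowhere near the 20 percent risk
that respondents perceived.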
Posner also insists that we should calculate the costs to liberty and
privacy of extreme police and military measures, such as torture, and
the likelihood that these extreme methods would, in fact, increase
security. But he then proves unable to calculate these costs and
benefits as well. "I have no idea whether [torture] is necessary," he
says after a long digression on the hypothetical benefits of torture,
and the effort to monetize the benefits of privacy similarly defeats
him.
It may be possible, in fact, to calculate the costs and benefits of
some security technologies with more precision than Posner
offers. Consider the government's original proposal, called Total
Information Awareness, to use data-mining at airports to determine
whether individual travelers had consumer and travel patterns that
resembled the 19 hijackers of 9/11. After the system was proposed,
libertarian critics, using cost-benefit analysis, pointed out the
great danger of false positives: Even if the system were 99 percent
accurate in identifying terrorists, a 1 percent error rate applied to
300 million travelers would mean that 3,000,000 innocent travelers
(that's .01 x 300 million) would be wrongly flagged as potential
terrorists. But if we assume that the next attack will look nothing
like the last one, a data-mining system that looked for passengers who
took flying lessons in Florida, for example, is more likely to have
something closer to a 1 percent accuracy rate. Such a system would
falsely accuse vast numbers of innocent travelers while correctly
identifying only a tiny fraction of the real terrorists and missing
nearly all of the rest. No rational evaluation of costs and
benefits would support such a system, which is why the government
correctly abandoned Total Information Awareness and replaced it with a
system designed to verify a traveler's identity rather than model
suspicious behavior.
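The base-rate arithmetic behind the critics' objection is easy to
reproduce. The 99 percent accuracy and 300 million travelers are the
figures from the debate; the assumed number of actual terrorists among
them (3,000) is a hypothetical chosen only to show the effect:

    # Base-rate arithmetic behind the false-positive objection.
    # The 3,000 actual terrorists is a hypothetical illustration.
    travelers = 300e6
    terrorists = 3_000                 # hypothetical
    innocents = travelers - terrorists
    false_positive_rate = 0.01         # 1 percent of innocents wrongly flagged
    true_positive_rate = 0.99          # 99 percent of terrorists flagged
    false_pos = innocents * false_positive_rate   # ~3 million innocents
    true_pos = terrorists * true_positive_rate    # ~2,970 terrorists
    posterior = true_pos / (true_pos + false_pos)
    print(f"Innocents falsely flagged: {false_pos:,.0f}")
    print(f"Chance a flagged traveler is a terrorist: {posterior:.2%}")
    # -> Innocents falsely flagged: 2,999,970
    # -> Chance a flagged traveler is a terrorist: 0.10%

Even at the generous 99 percent accuracy, only about one flagged
traveler in a thousand would actually be a terrorist; at anything like
a 1 percent accuracy rate, the system would be worse than useless.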
Posner does not describe the successful attempt by civil libertarians
to lobby against badly conceived security technologies by applying the
methods of cost-benefit analysis because actual political debates have
no place in his elaborate models of catastrophic risks. He wants to
reorient legal education to produce polymaths like himself, requiring
law students to demonstrate "basic competence" in math, statistics,
and science so that they could replicate his Herculean feats of
interdisciplinary synthesis. Specialists in various disciplines may
benefit from the collaborative research projects that Posner usefully
outlines. But the greatest challenges that menace us cannot be
precisely quantified by science; they are psychological and political.
Jeffrey Rosen is a law professor at George Washington University and
legal affairs editor of The New Republic. His new book is The Naked
Crowd: Reclaiming Security and Freedom in an Anxious Age.