[extropy-chat] The Proactionary Principle: comments encouraged on almost-final version
Eliezer S. Yudkowsky
sentience at pobox.com
Tue Nov 8 21:24:46 UTC 2005
Hal Finney wrote:
>
> Nevertheless I couldn't help recalling our discussion last month
> initiated by Robin Hanson, on the utility of scenario-based forecasting.
> (Thread title was "Inside Vs. Outside Forecasts".) Some of the advice
> in the proposed document amounts to creating inside-type forecasts,
> i.e. setting up scenarios, looking at probable outcomes, and making
> decisions on that basis. The paper we discussed last month shows that
> this forecasting methodology is not very good, unfortunately. It is
> prone to cognitive biases of many kinds.
Correct. I would also name an additional cognitive bias: defensibility.
Cost-benefit analyses aim at warding off anxiety about catastrophe, or
blame in the event of catastrophe. Warding off actual catastrophe is a
great deal harder. You do not realize this until you have written a
careful, elaborate analysis of risks and benefits (such as appears in
http://singinst.org/CFAI/policy.html) and then it turns out that Nature
would have gone ahead and killed you anyway, even though you'd conducted
a cost-benefit analysis. How unreasonable of Nature! What more does
She want from us? At that point I first realized the incredible
difficulty gap between fulfilling a deontological obligation to perform
a risk analysis, and actually avoiding risk. You can always perform a
risk analysis - it requires merely that you quantify your ignorance.
There's no guarantee that survival is even possible - survival requires
nonignorance, and nonignorance can be arbitrarily difficult to obtain.
It is in the nature of deontological social obligations that they tend
to be fulfillable, which tells you something about their distance from
the real world.
George Orwell wrote: "In our time, political speech and writing are
largely the defense of the indefensible. Things like the continuance of
British rule in India, the Russian purges and deportations, the dropping
of the atom bombs on Japan, can indeed be defended, but only by
arguments which are too brutal for most people to face, and which do not
square with the professed aims of the political parties. Thus political
language has to consist largely of euphemism, question-begging and sheer
cloudy vagueness."
Humanity can survive the loss of a thousand people, or a million people;
it survives fifty-five million deaths every year. It is therefore
appropriate to trade off the risk of fatal side effects against probable
benefits of life-saving pharmaceuticals, to minimize net casualties.
This is the argument which is too brutal for most people to face: it
requires accepting that every now and then, even after performing a
cost-benefit analysis, the Proactionary Principle will kill a few
thousand people - loudly, visibly, in full public view. The
Precautionary Principle kills many more people, but silently.
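To make the shape of that trade-off explicit - with invented numbers,
not estimates for any actual drug - here is a minimal expected-casualty
sketch in Python:

    # Hypothetical comparison of the casualties each policy produces over a
    # fixed window.  Every number below is invented purely for illustration.

    p_fatal_side_effect = 1e-5       # assumed chance a treated patient dies of a side effect
    patients_per_year   = 2_000_000  # assumed patients treated per year once approved
    p_life_saved        = 1e-3       # assumed chance the drug saves a given patient's life
    window_years        = 3          # assumed length of a Precautionary approval delay

    # Proactionary route: approve now; the cost is loud, visible side-effect deaths.
    side_effect_deaths = p_fatal_side_effect * patients_per_year * window_years

    # Precautionary route: delay approval; the cost is the silent deaths of the
    # patients the drug would have saved during the delay.
    foregone_lives_saved = p_life_saved * patients_per_year * window_years

    print(f"Visible deaths from approving now: {side_effect_deaths:,.0f}")
    print(f"Silent deaths from delaying:       {foregone_lives_saved:,.0f}")

Under these made-up figures the Proactionary route produces sixty
visible deaths over the delay window, while the Precautionary route
silently forgoes six thousand lives - the asymmetry described above.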
If human beings did not age, but still suffered accidents, we would in
no sense be immortal; we would live only until one of life's many
dangers cut us down. The human species is like an unaging individual
human; it has survived this far only because there has not been *any*
significant, recurring danger of extinction. Once we enter the realm
where existential risk becomes *possible*, it imposes a death sentence
on humankind, unless the window of vulnerability is bounded, and small.
No existential risk can be allowed to be realized, even once. It is as if you
did not age, but you were still vulnerable to all ordinary accidents,
and you absolutely had to survive at all costs. The Proactionary
Principle does not inculcate a mindset appropriate to such a task. It
is the creed of someone who can never really be hurt, as humankind can
never really be hurt by a pharmaceutical mistakenly approved.
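The unaging-individual analogy can be put into a single formula: if
each year of vulnerability carries an independent probability p of a
realized existential catastrophe, the chance of surviving N such years
is (1 - p)^N, which collapses unless N is kept small. A minimal sketch,
with a purely hypothetical per-year probability:

    # Chance of an "unaging" humanity surviving N years of exposure to a
    # recurring existential risk.  The per-year probability is invented.

    p_per_year = 0.01   # assumed 1% annual chance of a realized existential catastrophe

    for years in (10, 50, 100, 300, 1000):
        survival = (1 - p_per_year) ** years
        print(f"{years:>5} years of exposure -> {survival:7.2%} chance of survival")

Even a one-percent annual risk, sustained indefinitely, leaves roughly
a one-in-three chance of reaching the hundred-year mark and essentially
none of reaching the thousand-year mark - hence the window of
vulnerability must be bounded, and small.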
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence