[ExI] Planetary defense
Anders Sandberg
anders at aleph.se
Thu May 5 12:39:13 UTC 2011
I am working on my speech for the IAA Planetary Defense conference
("From Threat to Action"). It is such a cool name; the fact that it is
held in the Bucharest Palace of the Parliament (just google this
monstrosity) fuels my imagination of broad-shouldered science-generals
standing around a giant holodisplay screen moving missiles and lasers
into the right orbits to meet the invaders... Of course, the real topics
will be far more mundane - photometry, orbit estimation, mission
planning, international coordination and so on. But it is real work on
reducing a global catastrophic risk, which is terribly cool.
I would like to hear your input: what are the best approaches for
dealing with the "billion body problem", the fact that human
rationality when it comes to risk tends to be fairly... problematic. How
do we handle the bias that people underestimate a risk as long as no
disaster has happened yet, the fact that planetary defense is the
ultimate public good, and the tendency to treat big threats as belonging
in a fiction category: fun to think about, but not to act on?
And from a transhumanist perspective, assuming accelerating
technological progress, might it not be smart to wait when you detect a
threat X years out, since in a few more years we will have far more
tech to deal with it? How do you make a rational estimate of when to
strike, given uncertainty about how anti-threat technology will
develop plus the (known) increase in difficulty of deflecting a threat
the later you act?
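Just to make the trade-off concrete, here is a toy sketch (not a serious
model; the exponential capability growth, the delta-v-versus-lead-time
scaling and all the constants are made-up assumptions) of picking the
waiting time that maximises a crude success margin:

import numpy as np

IMPACT_IN_YEARS = 30.0    # hypothetical warning time
CAPABILITY_GROWTH = 0.07  # assumed annual improvement in deflection capability

def capability(wait):
    # Assumed exponential improvement in anti-threat technology while we wait.
    return np.exp(CAPABILITY_GROWTH * wait)

def required_effort(wait):
    # Assumed difficulty: needed delta-v scales inversely with remaining lead time.
    return IMPACT_IN_YEARS / (IMPACT_IN_YEARS - wait)

def success_margin(wait):
    # Crude proxy: what we can do divided by what we need to do.
    return capability(wait) / required_effort(wait)

waits = np.linspace(0.0, IMPACT_IN_YEARS - 1.0, 300)
best = waits[np.argmax(success_margin(waits))]
print("Best waiting time under these toy assumptions: %.1f years" % best)

Under these particular assumptions the optimum is to wait roughly half
the warning time; with faster tech growth or slower growth in deflection
difficulty the optimum shifts later, and vice versa.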
Another interesting aspect is that power-law distributed disasters with
probability P(x) ∝ x^-a (x is disaster size, a>1 is an exponent) have
infinite expectation if a<2 - sooner or later one is going to be much
larger than you can handle. Even for a>2 much of the expected loss
over time comes from the extreme tail, so it is rational to spend more on
fixing the extreme tail than on stopping the small everyday disasters. But
this will of course not sit well with taxpayers or victims. Should we
aim for setting up systems that really prevent end-of-the-world
scenarios even at great cost, or is it better to have mid-range systems
that show action occasionally? Even when they fail utilitarian
cost-benefit analysis and the MaxiPOK principle?
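A quick back-of-the-envelope check of the tail claim, assuming a Pareto
density p(x) = (a-1)*x^-a for x >= 1 with a > 2; the exponent and the
"extreme" threshold below are just illustrative choices:

a = 2.3          # assumed tail exponent (a > 2, so the mean is finite)
threshold = 1e3  # events 1000x the minimum size count as "extreme"

# Closed forms for a Pareto density p(x) = (a-1)*x**(-a), x >= 1:
prob_extreme = threshold ** (1.0 - a)   # P(X > threshold)
loss_share = threshold ** (2.0 - a)     # fraction of E[X] carried by X > threshold

print("P(extreme event)       = %.2e" % prob_extreme)
print("share of expected loss = %.1f%%" % (100 * loss_share))
# With a = 2.3: about 1 event in 8000 is "extreme", yet those events carry
# roughly 13% of the total expected loss; as a -> 2 the share goes to 100%.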
And watch the skies:
http://www.universetoday.com/85360/take-a-look-huge-asteroid-to-fly-by-earth-in-november/
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University