[ExI] Planetary defense

Kelly Anderson kellycoinguy at gmail.com
Thu May 5 19:07:26 UTC 2011


On Thu, May 5, 2011 at 6:39 AM, Anders Sandberg <anders at aleph.se> wrote:
> I am working on my speech for the IAA Planetary Defense conference ("From
> Threat to Action"). It is such a cool name; the fact that it is held in the
> Bucharest Palace of the Parliament (just google this monstrosity) fuels my
> imagination of broad-shouldered science-generals standing around a giant
> holodisplay screen moving missiles and lasers into the right orbits to meet
> the invaders... Of course, the real topics will be far more mundane -
> photometry, orbit estimation, mission planning, international coordination
> and so on. But it is real work on reducing a global catastrophic risk, which
> is terribly cool.
>
> I would like to hear your input: what are the best approaches for dealing
> with the "billion body problem", the fact that human rationality when it
> comes to risk tends to be fairly... problematic. How do we handle the bias
> that as long as no disaster has happened yet people underestimate the risk,
> the fact that planetary defense is the ultimate public good, and the
> tendency to treat big threats as belonging in a fiction category, fun to
> think about but not act on?
>
> And from a transhumanist perspective, assuming accelerating technological
> progress, might it not be smart to wait after detecting a threat X years
> out, since in a few more years we will have far more tech to deal with it?
> How do you make a rational estimation of when you should strike, given
> uncertainties in how anti-threat technology will develop plus the (known)
> increase in difficulty in deflecting threats later?
>
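
One way to build intuition on the wait-or-strike question is a toy model:
let deflection capability grow exponentially while the required push grows
as lead time shrinks. A minimal sketch in Python, with the threat horizon,
growth rate, and functional forms picked purely for illustration:

    import math

    X = 20.0   # years until impact (hypothetical)
    g = 0.15   # annual growth rate of deflection capability (hypothetical)

    def success_probability(t):
        """Chance that a deflection effort launched at year t succeeds."""
        capability = math.exp(g * t)        # tech keeps improving while we wait
        required = 1.0 / max(X - t, 1e-6)   # needed push grows as lead time shrinks
        ratio = capability / required
        return ratio / (1.0 + ratio)        # squash into (0, 1)

    best_t = max((i * X / 100.0 for i in range(100)), key=success_probability)
    print("launch at year %.1f, success probability %.2f"
          % (best_t, success_probability(best_t)))

With these made-up numbers the optimum is interior: waiting buys capability
up to around year 13, after which the shrinking lead time dominates.
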
> Another interesting aspect is that power-law distributed disasters with
> probability P(x) ~ x^-a (x is disaster size, a>1 is an exponent) have
> infinite expectation if a <= 2 - sooner or later one is going to be much
> larger than you can handle. Even for a>2 much of the expected loss over time
> comes from the extreme tail, so it is rational to spend more on fixing the
> extreme tail than on stopping the small everyday disasters. But this will of
> course not sit well with taxpayers or victims. Should we aim for setting up
> systems that really prevent end-of-the-world scenarios even at great cost,
> or is it better to have mid-range systems that show action occasionally?
> Even when they fail utilitarian cost-benefit analysis and the maxipok
> principle?
>
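
To put numbers on the tail dominance: if the density is normalized to
p(x) = (a-1) x^-a on [1, inf), the share of long-run expected loss from
disasters larger than T integrates out to T^(2-a) for a > 2. A quick table
(exponents and thresholds chosen arbitrarily):

    # Fraction of long-run expected loss carried by disasters bigger than T,
    # assuming p(x) = (a-1) * x**(-a) on [1, inf) with a > 2.
    # E[x; x > T] / E[x] works out to T**(2 - a).
    for a in (2.1, 2.3, 2.5, 3.0):
        for T in (10.0, 100.0, 1000.0):
            print("a=%.1f  T=%6.0f  tail share=%.3f" % (a, T, T ** (2.0 - a)))

Even with a = 2.5, events a hundred times the typical size still carry about
10% of the expected loss, which is why the tail is where the money arguably
belongs.
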
> And watch the skies:
> http://www.universetoday.com/85360/take-a-look-huge-asteroid-to-fly-by-earth-in-november/

Some people seem more than willing to spend countless trillions of
dollars resolving, or just mitigating, global warming. Compare almost
any risk to humanity against global warming in terms of a cost-risk
analysis, and you can make a really good case for addressing it
instead. It's a powerful way to make your point, I think.
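
To make that concrete: the whole comparison reduces to expected annual loss
versus annual mitigation spend. A sketch where every figure is a placeholder
rather than an estimate:

    # Back-of-the-envelope cost-risk comparison. All figures below are
    # placeholders to show the shape of the argument, not estimates.
    annual_impact_prob = 1e-7   # assumed chance/year of an extinction-class strike
    loss = 1e18                 # assumed dollar-equivalent loss from extinction
    mitigation_budget = 1e9     # assumed annual detection/deflection spend

    expected_annual_loss = annual_impact_prob * loss
    print("expected annual loss: $%.2e" % expected_annual_loss)
    print("spending justified" if mitigation_budget < expected_annual_loss
          else "spending excessive")

The same two-line calculation can be run for any risk, which is what makes it
a useful baseline for the argument.
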

Global warming may be catastrophic, but even under the most alarming
scenarios, nobody sees it as an extinction event for human
beings. Yet an asteroid strike of sufficient size is exactly such an
event. What do the mathematical models you use have to say about
climate change, and how does the response to that compare to the
response to asteroid detection and mitigation?

Just a thought.

-Kelly
