[ExI] Planetary defense
Richard Loosemore
rpwl at lightlink.com
Thu May 5 13:48:53 UTC 2011
Anders Sandberg wrote:
> I would like to hear your input: what are the best approaches for
> dealing with the "billion body problem", the fact that human
> rationality when it comes to risk tends to be fairly... problematic. How
> do we handle the bias that as long as no disaster has happened yet
> people underestimate the risk, that planetary defense is the ultimate
> public good, and that people tend to treat big threats as belonging in a
> fiction category that are fun to think about but not act on?
Although I am concerned that human rationality is problematic when it
comes to risk assessment, I am even more concerned with a meta-aspect
of this problem: the irrationality of those who say they are studying
the risks.
Which is to say, people who *assess* risk behave irrationally in the
face of quantifiable versus non-quantifiable risks. When they can put
numbers on a risk, they love to study the heck out of it (even if, in
fact, it is not that important), because playing games with numbers
and equations gives the risk scientist a warm feeling of doing something.
If, on the other hand, the risk scientist can't put numbers on a
certain category of risk, it is not much fun to play with, so she tends
to downplay or ignore it.
Case in point: planetary defense. Plenty of scope for mathematical
analysis.
By contrast, consider the risk of complex civilisational collapse (due to
a cluster of interacting factors too large to name here) that occurs
before a technology is found that would enable survival beyond the end of
easy energy. This -- which is probably a billion times more likely, in
the next fifty years, than asteroidal obliteration of all planetary life
-- offers little scope for analysis. So it receives much less attention.
Another example, which I consider far more dangerous, is the debate over
AI safety. Since AI (if it could be done safely) would have the
potential to solve many of these problems (both civilisational collapse
and asteroidal impact), it begs to be given priority consideration.
However, those who study the risks of AI seem to be almost obsessed with
doing one of two things:
a) Calculating what can be calculated (pursuing logical proofs of
friendliness, or the impossibility of such proofs), or
b) Investigating abstract mathematical "AI" theories that (for
example) entail computing systems with resources exceeding the size or
lifetime of an infinite number of universes, with no clear way to
relate these to real implementations that actually work.
Both of these lines of research are immense fun, if you love
mathematics. But in the face of the practical issue of getting to a
solution, they are worse than useless.
I think that the best thing that could happen right now is for the risk
scientists to stop and look at themselves for a while. Try to
understand their OWN biases first; then, when they've got a grip on
that, get back to looking at everyone else's.
Richard Loosemore