On 07/10/2012 08:48, Tomaz Kristan wrote:
> Anders Sandberg said:
>
> > More seriously, Charlie makes a good point: if we want to make the
> > world better, it might be worth prioritizing fixing the stuff that
> > makes it worse according to the damage it actually makes.
>
> No, it is not good enough. Not wide enough. You should calculate
> also with the stuff which currently doesn't do much damage, but it
> would, if it has a chance.

Yes. As hinted in my section on xrisk, this is actually one of the
big research topics at FHI.

Extreme tail risks do matter, and can sometimes totally dominate the
everyday risks. For example, suppose that on average X% of people die
from cancer each year, with a bit of normally distributed noise. Also
suppose pandemics kill people according to a power law distribution:
most years a handful, but occasionally a great many. Then it turns out
that if the power law exponent is between -1 and -2, the average
diverges: wait long enough and a sufficiently big pandemic will wipe
out any number of people. So if you try to reduce the expected number
of deaths per year, the pandemic risk is far more important, even if
the typical incidence rate is far, far lower than those X%. The same
holds for wars, democides and maybe agricultural crashes. (The fact
that there is just a finite number of humans complicates the analysis
in interesting ways. Paper coming up.)
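
(To see where the divergence comes from, here is a minimal sketch,
under the idealizing assumption of a pure power law density
p(x) = C x^{-\alpha} for yearly death tolls x >= 1:

\[
E[X] = \int_1^\infty x \, C x^{-\alpha} \, dx
     = C \int_1^\infty x^{1-\alpha} \, dx ,
\]

which is finite only when \alpha > 2. For 1 < \alpha <= 2, i.e. an
exponent between -1 and -2, the distribution still normalizes but its
mean is infinite, so the running average of observed deaths grows
without bound.)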

But not all power law tails matter. Asteroid deaths have an exponent
that is so negative that the expectation does not diverge, and the
rate of deadly impacts is low. So fixing other threats actually has
higher priority (which is almost a shame, since it would be great to
have asteroid defence as a motivation for space colonisation).
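
For anyone who wants to see the contrast numerically, here is a quick
Python sketch. It is my own illustration, not from the paper: the tail
exponents -1.8 and -3.5 are made-up stand-ins for the pandemic-like
and asteroid-like regimes, not fitted values.

import numpy as np

rng = np.random.default_rng(0)

def running_mean(tail_exponent, years=1_000_000):
    # numpy's pareto(a) samples a Lomax distribution whose density
    # falls off as x^-(a+1) for large x; adding 1 gives a classical
    # Pareto on [1, infinity) with the same tail exponent.
    a = -tail_exponent - 1
    tolls = 1.0 + rng.pareto(a, years)
    return np.cumsum(tolls) / np.arange(1, years + 1)

pandemic_like = running_mean(-1.8)   # a = 0.8 <= 1: mean diverges
asteroid_like = running_mean(-3.5)   # a = 2.5 > 1: mean is finite

for n in (10**3, 10**5, 10**6):
    print(n, pandemic_like[n - 1], asteroid_like[n - 1])

With the heavy tail the running mean jumps upward whenever a freak
year arrives and never settles; with the steep tail it converges to
its true value a/(a-1), about 1.67 here.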

And then there is the question of unprecedented risk. How do you
estimate it? Are there rational ways of handling threats that have not
existed before, and where we know we lack information? Some very
interesting problems there that we are trying to get funding for.

> This line of reasoning is not very wise, sorry.

Wisdom is the ability to figure out what questions we ought to solve.
Figuring out how to prioritize the big problems in the world, and why
we go wrong at it, seems to be nearly the definition of applied
wisdom...

But strangely, very few people researched it until about a decade
ago. It is still a very small research field. I think that is a
pretty impressive example of the collective folly of our species.

--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University