[ExI] Consequentialist world improvement
Anders Sandberg
anders at aleph.se
Sun Oct 7 10:59:30 UTC 2012
On 07/10/2012 08:48, Tomaz Kristan wrote:
> Anders Sandberg said:
>
> > More seriously, Charlie makes a good point: if we want to make the
> world better, it might be worth prioritizing fixing the stuff that
> makes it worse according to the damage it actually makes.
>
> No, it is not good enough. Not wide enough. You should also account
> for the stuff which currently doesn't do much damage, but which would,
> if it got the chance.
Yes. As hinted in my section on xrisk, this is actually one of the big
research topics at FHI.
Extreme tail risks do matter, and can sometimes totally dominate the
everyday risks. For example, suppose that on average X% of people die
from cancer each year, with a bit of normally distributed noise. Also
suppose pandemics kill people according to a power law distribution:
most years they kill only a handful, but occasionally a great many. Then
it turns out that if the power law exponent is between -1 and -2, the
average diverges: wait long
enough and a sufficiently big pandemic will wipe out any number of
people. So if you try to reduce the expected number of deaths per year,
the pandemic risk is far more important - even if the typical incidence
rate is far, far lower than those X%. Same thing for wars, democides and
maybe agricultural crashes. (The fact that there is just a finite number
of humans complicates the analysis in interesting ways. Paper coming up.)
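To make that divergence concrete, here is a minimal Monte Carlo sketch
in Python (my own illustration; all the numbers - the cancer toll, the
exponent of 1.5, the minimum pandemic size - are made up). The point is
only that with an exponent in the divergent range the running average
never settles down, because it keeps being dominated by the largest
event seen so far:

    import numpy as np

    rng = np.random.default_rng(0)
    years = 100_000

    def power_law_sample(alpha, x_min, size, rng):
        # Inverse-transform sampling from p(x) ~ x^(-alpha), x >= x_min,
        # alpha > 1. With alpha = 1.5 (density exponent -1.5, i.e. between
        # -1 and -2) the theoretical mean is infinite.
        u = rng.random(size)
        return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

    # "Cancer": a roughly constant yearly toll with Gaussian noise
    # (illustrative parameters only).
    cancer = rng.normal(loc=100_000, scale=5_000, size=years)

    # "Pandemics": heavy-tailed yearly toll.
    pandemic = power_law_sample(alpha=1.5, x_min=10.0, size=years, rng=rng)

    running_mean = np.cumsum(pandemic) / np.arange(1, years + 1)
    print("cancer sample mean:       ", cancer.mean())
    print("pandemic mean, 1k years:  ", running_mean[999])
    print("pandemic mean, all years: ", running_mean[-1])
    print("largest single pandemic:  ", pandemic.max())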
But not all power law tails matter. Asteroid deaths have an exponent
that is so negative that the expectation does not diverge, and the rate
of deadly impacts is low. So fixing other threats actually has higher
priority (which is almost a shame, since it would be great to have
asteroid defence as a motivation for space colonisation).
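The same kind of sketch (again with made-up numbers) shows the contrast:
for an exponent well beyond the divergent range the mean is finite and
the sample mean converges, which is roughly the asteroid situation.

    import numpy as np

    rng = np.random.default_rng(1)
    # Illustrative values: density ~ x^(-3.5), far steeper than -2.
    alpha, x_min, years = 3.5, 10.0, 100_000
    steep = x_min * (1.0 - rng.random(years)) ** (-1.0 / (alpha - 1.0))

    # Finite theoretical mean: x_min * (alpha - 1) / (alpha - 2) ~ 16.7.
    print("theoretical mean:", x_min * (alpha - 1.0) / (alpha - 2.0))
    print("sample mean:     ", steep.mean())
    print("largest event:   ", steep.max())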
And then there is the question of unprecedented risk. How do you
estimate it? Are there rational ways of handling threats that have not
existed before, and where we know we lack information? There are some
very interesting problems there that we are trying to get funding for.
> This line of reasoning is not very wise, sorry.
Wisdom is the ability to figure out what questions we ought to solve.
Figuring out how to prioritize the big problems in the world, and why we
go wrong at it, seems to be nearly the definition of applied wisdom...
But strangely, very few people researched it until about a decade ago.
It is still a very small research field. I think that is a pretty
impressive example of the collective folly of our species.
--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University