[ExI] asteroid defense

Anders Sandberg anders at aleph.se
Tue Mar 1 13:19:56 UTC 2011


spike wrote:
> To clarify, I was suggesting a temporary closing of the season on
> libertarian topics, not asteroids.  Asteroid defenders, go for it.
> Libertarians, hold yer fire please.
>   

Except for our privately funded anti-asteroid cannons, of course!

Some modeling aloud:

Consider the problem of funding an anti-asteroid mission by the "street 
performer protocol" ( 
http://www.schneier.com/paper-street-performer.html ): it needs T units 
of money, and people put money in escrow until the sum t reaches T, 
whereupon the plan gets implemented (otherwise, nothing happens). Each 
individual i has a utility u_i (distributed according to the 
probability distribution f(u)) of the project succeeding. They think it 
is worth donating if the chance of success (both getting the money and 
stopping the asteroid) times their utility is greater than the cost to 
them. The cost is either the donated money, if the project goes ahead, 
or the lost interest they could have earned by keeping the money in 
their own account, if the project does not happen.
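
As a minimal sketch of the escrow rule, in the style of the appendix 
script (the numbers are made up):

% Street performer escrow, toy version: pledges sit in escrow until
% their sum reaches T; otherwise they are returned to the donors.
T = 100;              % money needed for project
pledges = [30 25 20]; % hypothetical pledges currently in escrow
if sum(pledges) >= T
    disp('threshold reached: project goes ahead')
else
    disp('threshold missed: pledges returned')
end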

The "game" looks like this:

                     Happens       Doesn't happen
Donate               u_i - c       -kc
Don't donate         u_i           0

c is the amount of money donated, and k represents the small cost of 
not having the money available for a while (lost interest).

The chance of the project succeeding is a function of the donations 
already given: P_i(t), which we can assume is some kind of sigmoid 
function with an asymptote P_i(infinity) corresponding to the 
individually estimated chance of the mission succeeding, an inflection 
point somewhere below T, and a sharpness of the transition from "not 
enough money to convince everybody else" to "enough money to convince 
everybody else". The expected utility of donating is 
P_i(t)(u_i - c) - kc(1 - P_i(t)); the expected utility of not donating 
is P_i(t)u_i. Their difference is -P_i(t)c - kc(1 - P_i(t)), which is 
negative for any k >= 0, so the second is always larger than the 
first. We have a problem: no rational player should want to donate, 
and everybody waits for everybody else.
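
A throwaway numerical check of this (the parameter values below are 
just illustrative):

% Donating vs free riding for k >= 0:
% EU_donate = P*(u-c) - k*c*(1-P), EU_free = P*u, so the gap is
% EU_donate - EU_free = -P*c - k*c*(1-P), which is <= 0 when k >= 0.
P = linspace(0,1,101);   % possible success probabilities
u = 1; c = 0.2; k = 0.1; % illustrative parameter values
gap = -P*c - k*c*(1-P);  % advantage of donating over free riding
max(gap)                 % never positive: free riding dominates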

How can this be fixed? The classic approach is to say the government 
will do it. But governments are just players in this game too, and 
hence China will wait for the US to save them, and so on. A 
non-libertarian approach would be to add a penalty term to the don't 
donate case: everybody who didn't get involved gets punished (maybe the 
project drops small meteors on them). This is of course morally 
problematic, but typical of governments trying to tax us. Removing k by 
giving back the interest on the escrowed money doesn't change things: 
the difference above becomes -P_i(t)c, which is still negative.

What if k were negative? In this case you would get back more money 
than you put in, as a reward for showing your cooperation, even if the 
project fails. Now it would be rational for some people to donate, 
making t increase and thus enticing more people to donate. Writing 
m = -k for the bonus rate, people with m/(1+m) > P_i(t) would want to 
donate - the people who think "this cannot be done". If they are right, 
they get rewarded. If they are wrong, everybody gets rewarded by u_i 
but they get mildly punished for their pessimism (they paid c). The 
nice thing is that as soon as the total sum of donations starts to 
increase, you can get positive feedback, since the optimists who think 
it can be done if enough people donate will now start to think it has 
a decent chance (strictly speaking they are still being irrational, 
since they are paying rather than free riding, but we should recognize 
that there is a bit of altruism in humans).
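
A quick check of that threshold with made-up numbers:

% With k = -m < 0, donating beats free riding when m*c*(1-P) > P*c,
% i.e. when P_i(t) < m/(1+m).
m = 0.25;              % bonus rate, i.e. k = -0.25
c = 1;                 % donation size
Pstar = m/(1+m)        % threshold: 0.2
P = [0.1 0.3];         % a pessimist and an optimist
gap = -P*c + m*c*(1-P) % positive for the pessimist only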

Imagine that you are a charity or company trying to make this happen. 
You have your own estimate of P(t) and of the distribution of people's 
views. If you think people are *too pessimistic* about the feasibility, 
then this strategy seems to be a very good one. You get the large pool 
of pessimists to invest in the project (they are of course gambling 
that they are right and you are wrong; assuming they have normal 
cognitive biases, they will be overconfident), and then have enough 
capital to bring the optimists in too. If your project fails you have 
to pay the bonus mt extra back to the participants; if it succeeds you 
reap the rewards of the project (which can be more than just the 
utility of saving the earth: you might have a designated profit margin 
on the capital, for example).
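
A back-of-envelope version of the charity's gamble (the margin is my 
own illustrative parameter, not part of the model above):

% Expected value for the charity: on failure it pays the bonus m*t
% back on top of the donations; on success it keeps a profit margin
% on the raised capital. All numbers here are made up.
P = 0.8;       % charity's own estimate of success
t = 100;       % money raised
m = 0.1;       % bonus rate promised to donors
margin = 0.05; % hypothetical profit margin on the capital
EV = P*margin*t - (1-P)*m*t % positive if the charity is confident enough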


[ A simpler model which is fun to play around with is just to assume 
people pay u_i P_i(t), topping up to this level as t changes. I have 
run various simulations of this (see the appendix). It produces a 
phase transition between "nobody/few pay" and "a lot of people pay" as 
N increases, with a sharpness dependent on the sharpness of the P_i 
functions. In fact, sharper changes from "it won't work" to "it will 
work" increase the number of runs where the charity fails even at high 
N. There is also a strong need for early adopters: one can imagine 
people as a queue from most pessimistic to least pessimistic. For 
every optimist who donates at the right end of the queue, the boundary 
between non-donors and donors tends to shift a bit to the left. The 
above scheme of incentivizing pessimists to donate achieves this by 
getting a bunch on the left to donate too.

Depending on the settings of c, k and the other parameters, the 
incentivizing model also produces some nonmonotonic behavior. For 
example, if c=10, T=100 and k=-10%, then for a small group everybody 
donates, secure in the knowledge that the target will not be reached 
and they will make money. As the group gets larger, the pessimists 
drop out and the total funding decreases, until the group becomes so 
large that the optimists start to have an effect (a variant run with 
these parameters is sketched after the appendix script).

An important thing is that if c is small enough, it is much easier to 
get the transition to happen, again because more early adopters bring 
up t. In fact, this might be a bit of a problem in general: people 
will never pay if u_i << c, and given that proactive defense has low 
per-person utilities (on the order of a one in 20,000 lifetime risk), 
donations might have to be very small. ]


Any thoughts?


Appendix: a matlab script for running the sim

T=100; % money needed for project

ff=[];
for N=10:1000 % run population size from 10 to 1000

    u=1+randn(N,1); % utility of asteroid defense
    Pinf = rand(N,1); % estimated probability of defense working
    theta = rand(N,1)*T; % inflection point for prob estimate
    g=rand(N,1); % steepness of inflection

    t=0; % money collected
    told=-1;
    c=.21; % payment size
    k=0.1; % cost of donating but not getting defense

    while (t-told>1e-3) % iterate until donations stop growing
        told=t;
        P=Pinf.*(.5+.5*tanh(g.*(t-theta))); % how likely is success?
        worthpaying = max(0,(u-c).*P-(1-P)*c*k); % expected utility of paying
        t=sum(worthpaying>0)*c; % donate c if the utility is positive
    end

    ff=[ff t];
    plot(10:N, ff,'.')
    drawnow
end
ylabel('Total donations')
xlabel('N')
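
And the variant run mentioned above, for hunting the nonmonotonic 
regime: the same loop with c=10, T=100, k=-10% (the exact shape of the 
curve depends on the random draws):

T=100; c=10; k=-0.1; % parameters from the nonmonotonic example
ff=[];
for N=10:1000
    u=1+randn(N,1); Pinf=rand(N,1);
    theta=rand(N,1)*T; g=rand(N,1);
    t=0; told=-1;
    while (t-told>1e-3)
        told=t;
        P=Pinf.*(.5+.5*tanh(g.*(t-theta)));
        worthpaying=(u-c).*P-(1-P)*c*k; % expected utility of paying
        t=sum(worthpaying>0)*c; % donate c if it is positive
    end
    ff=[ff t];
end
plot(10:1000, ff, '.')
ylabel('Total donations')
xlabel('N')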

Anders Sandberg,
Future of Humanity Institute 
James Martin 21st Century School 
Philosophy Faculty 
Oxford University 



