[ExI] for the fermi paradox fans

Anders Sandberg anders at aleph.se
Sun Jun 15 22:29:19 UTC 2014


More thinking about the adversarial case:
The adversaries A and B want to calculate something, and their utilities UA and UB are monotonic functions of the amount of resources they get. (If we want to complicate things we may consider situations where utilities depend on what the other guy gets, but let's keep things simple). Each starts with an amount of resources RA and RB. 
The value of resources can change. Consider the Landauer model: temperature declines as exp(-Ht), and since erasing a bit costs kT ln 2, the amount of computation a Joule buys - and hence its value - increases as exp(Ht)! At least until we hit the horizon radiation temperature, after which it stays constant. If A has discount rate a, the value of a Joule t units into the future is exp((H-a)t). So if discounting is faster than the universe cools (a > H), A will want to burn resources *now*. If a is smaller than H, then A will want to save until the horizon-temperature era: using resources today is wasteful. 
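A minimal numerical sketch of that argument (toy Python; H, the current temperature, the horizon floor and the discount rates are all placeholder values - only the exp((H-a)t) shape matters):

import numpy as np

# Toy model (illustrative numbers only): T(t) = T0*exp(-H*t), floored at the
# horizon temperature. By Landauer, bit erasures per Joule ~ 1/(kT ln 2), so a
# Joule buys exp(H*t) more erasures until the floor. An agent with discount
# rate a values spending the Joule at time t as exp(-a*t) times that.

H = 1.0           # cooling rate (arbitrary units)
T0 = 1.0          # current temperature (arbitrary units)
T_floor = 1e-6    # assumed horizon radiation temperature

def bits_per_joule(t):
    T = max(T0 * np.exp(-H * t), T_floor)
    return 1.0 / T    # constants (k_B ln 2) dropped

def value_of_spending_at(t, a):
    return np.exp(-a * t) * bits_per_joule(t)

ts = np.linspace(0.0, 30.0, 3001)
for a in (2.0, 0.5):  # a > H: impatient agent; a < H: patient agent
    vals = np.array([value_of_spending_at(t, a) for t in ts])
    best = ts[vals.argmax()]
    print(f"discount rate a={a}: best time to burn the Joule is t={best:.2f}")
# a=2.0 -> t=0 (burn now); a=0.5 -> t ~ ln(T0/T_floor)/H (wait for the floor).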
If A and B can change their rate of subjective time, then discount factors presumably change in the same way (in fact, the clockspeed is likely proportional to resource usage and vice versa). If A is naturally short-term, it might still want to slow down its clockspeed for a few trillion years and then start gorging in the dark era, since it will - subjectively - just get a lot more resources in the short term. So everybody will want to go to the far era... unless their utility levels off very fast (a being that only wants to calculate the googolth prime does not benefit from resources beyond what is needed for this). So A and B are a bit concerned about those finitist beings who would want resources early and then stop - and about the other guy! Just because both A and B want to use the resources later doesn't mean they will not try to grab them early.
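To make the clockspeed point concrete, here is a sketch under one extra assumption I am adding for illustration (subjective time = clockspeed times wall-clock time, so an agent with subjective discount rate a running at clockspeed s has effective wall-clock rate a*s); numbers are again placeholders:

import numpy as np

# Reusing the toy cooling model above. Assumption (for illustration only):
# a naturally short-term agent (a > H) can choose clockspeed s < 1, which
# turns its effective wall-clock discount rate into a*s.

H = 1.0
T0, T_floor = 1.0, 1e-6
a = 2.0                      # naturally short-term agent (a > H)

def bits_per_joule(t):
    return 1.0 / max(T0 * np.exp(-H * t), T_floor)

def value(t, a, s):
    return np.exp(-a * s * t) * bits_per_joule(t)

ts = np.linspace(0.0, 30.0, 3001)
for s in (1.0, 0.25):        # full speed vs. slowed-down clock
    vals = np.array([value(t, a, s) for t in ts])
    best = ts[vals.argmax()]
    print(f"clockspeed s={s}: effective rate a*s={a*s}, best burn time t={best:.2f}")
# s=1.0  -> a*s=2.0 > H: burn now.
# s=0.25 -> a*s=0.5 < H: sleep until the horizon-temperature era, then gorge.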
There is an interesting physics of grabbing (kleptophysics?), I suspect. Can resources be stolen with an expected profit, or will it be zero- or negative-sum? Scorched-earth situations make grabbing negative-sum. Moving mass around incurs a rocket-equation propellant cost of m(exp(v/k)-1) kg (where m is the grabbed 'payload', v the transfer velocity and k the exhaust velocity, i.e. the specific impulse) - let's not do the relativistic case, it is even worse! So the value today to A of stealing mass m will be proportional to UA(m)*exp((H-a)(d/v)), where d is the distance to move it. The cost to B if A steals the mass now is UB(m(exp(v/k)-1)). Suppose these U are just linear and identical. Then the loss to B is much larger than the gain to A. If A is short-term, it will also want to use a big v, making things even more wasteful. Long-term civilizations are content to let the loot drift to their treasure pile over cosmological times, although presumably it is then vulnerable in transit (if not to stealing back, then at least to 'if I can't have this star, nobody will!' BOOM!). So it seems that unless UB is pretty convex, it might be rational for B to do scorched earth against A. The situation for energy might be tricky; Eric Drexler has mentioned some clever long-term storage ideas that might remove the exponential nastiness of the rocket equation and would hence move things closer to a zero-sum situation. More research is needed.
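A toy calculation of that comparison (U linear and identical as above; all parameter values are invented, and one could argue the payload m itself should be added to B's loss on top of the propellant):

import numpy as np

# Gain to A: m * exp((H - a) * d / v), the loot valued after travel time d/v.
# Loss to B: the rocket-equation propellant m * (exp(v/k) - 1), as in the
# text (the stolen payload m arguably adds a further m to B's loss).

H = 1.0     # cooling rate
a = 2.0     # A's discount rate (short-term, a > H)
d = 10.0    # distance the loot must be moved
k = 1.0     # exhaust velocity ("isp")
m = 1.0     # payload mass grabbed

def gain_to_A(v):
    return m * np.exp((H - a) * d / v)

def loss_to_B(v):
    return m * (np.exp(v / k) - 1.0)

for v in (1.0, 2.0, 5.0, 10.0):
    print(f"v={v:4.1f}: gain to A = {gain_to_A(v):7.4f}, loss to B = {loss_to_B(v):10.1f}")
# A short-term A prefers a large v (less time for its discount to bite),
# but the propellant cost grows as exp(v/k), so the transfer is strongly
# negative-sum; that is what makes scorched earth tempting for B.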
So if A and B do not care about each other other than as potential threats to their resources, then it looks rational to try to negotiate an equilibrium. I guess this is totally standard economic game theory that I cannot do at 23:21 in the evening. I suspect the end of the story is that they make a binding deal, leave each other alone or merge their resources and utilities, and live happily ever after. 
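For what it is worth, here is a toy 2x2 version of that game (payoff numbers invented, just to show the shape): grabbing pays the grabber a little and costs the victim a lot, so without commitments it looks like a prisoner's dilemma, while a credible scorched-earth threat or binding deal makes mutual restraint the equilibrium.

# Toy 2x2 grab-or-refrain game with invented payoffs.

STRATS = ("Refrain", "Grab")

def nash_equilibria(payoff):
    """payoff[(sA, sB)] = (uA, uB); return the pure-strategy Nash equilibria."""
    eqs = []
    for sA in STRATS:
        for sB in STRATS:
            uA, uB = payoff[(sA, sB)]
            best_A = all(uA >= payoff[(oA, sB)][0] for oA in STRATS)
            best_B = all(uB >= payoff[(sA, oB)][1] for oB in STRATS)
            if best_A and best_B:
                eqs.append((sA, sB))
    return eqs

# No commitment: grabbing gains the grabber +2 but costs the victim -6.
no_commitment = {
    ("Refrain", "Refrain"): (10, 10),
    ("Refrain", "Grab"):    (4, 12),
    ("Grab",    "Refrain"): (12, 4),
    ("Grab",    "Grab"):    (6, 6),
}

# Scorched earth / binding deal: grabbed resources are destroyed, so the
# grabber gains nothing and even loses a little to retaliation.
with_commitment = {
    ("Refrain", "Refrain"): (10, 10),
    ("Refrain", "Grab"):    (4, 9),
    ("Grab",    "Refrain"): (9, 4),
    ("Grab",    "Grab"):    (3, 3),
}

print("no commitment:  ", nash_equilibria(no_commitment))   # [('Grab', 'Grab')]
print("with commitment:", nash_equilibria(with_commitment)) # [('Refrain', 'Refrain')]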
Also, the accelerating expansion of the universe means that after a certain time their domains will lose causal contact: they do not have to deal with each other eternally.
Basically, I think this points towards external rational adversaries being relatively manageable. Singleton civilizations also prevent internal adversaries. The problems are the cases where irrational adversaries are around (burners of the cosmic commons, the googolth-prime calculator), where utility functions depend on each other (zorgons love to simulate humans in agony; our utility is decreased by theirs), and where non-singleton civilizations evolve internal adversaries. Have I missed anything?

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University