[ExI] BICEP2 and the Fermi paradox

Anders Sandberg anders at aleph.se
Fri Apr 11 18:39:28 UTC 2014


Tomaz Kristan <protokol2020 at gmail.com> , 11/4/2014 12:46 AM:
> So this predicts that a random observer should predict he is in a simulation of an early interval. 
Then he must also predict that his simulator is also simulated! And so on, through all the turtles/simulators?
Hehe. In my basic model, where the computing power just grows without limit, this would make sense.
If we assume the increase only lasts time T_omega, then the observer should expect to be between interval 1 (the earliest interval) and interval T_omega/ln(F) (the most recent). But the *average* nesting will be deep.
Estimating how deep our world is, given that we know we are not running any real ancestor sims, can be done this way: in interval N there will be X sims of N-1, FX of N-2, F^2 X of N-3, and so on. The total number of sims started from that level is around X(F^(N-1)-1)/(F-1).

So if we instead try to calculate the number of level 1 sims S(N), we get S(2)=X from level 2, S(3)=FX+X^2 from level 3, and in general S(N)=X[ S(N-1) + F S(N-2) + F^2 S(N-3) + ... ]. Plugging in S(N)=exp(lambda N) as an ansatz, we get exp(lambda N) = X[ exp(lambda N)/exp(lambda) + F exp(lambda N)/exp(2 lambda) + F^2 exp(lambda N)/exp(3 lambda) + ... ], which gives us 1 = X[ exp(-lambda) + F exp(-2 lambda) + F^2 exp(-3 lambda) + ... ] = X exp(-lambda)/(1 - F exp(-lambda)), or lambda = ln(X+F). So basically, unsurprisingly, the number of level 1 simulations grows exponentially over time.

So we can throw in Bayes' theorem to estimate P(N levels | we are a level 1 sim) = P(we are level 1 | N levels) P(N levels) / P(we are level 1). Given all the above, the first factor is about constant (exponentially increasing sims, with level 1 sims forming an exponentially growing subset), and the final factor is just normalization. The second factor, the prior for N, seems to be doing most of the work here: it is fairly uninformative to know that you are a level 1 sim, since you could be the only sim in a level 2 world or one of quadrillions in a very nested world.
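As a sanity check, the recursion for S(N) is easy to iterate numerically. A minimal Python sketch (the values of X and F are made up for illustration, with F > 1 since sims of earlier intervals are cheaper and so more numerous):

```python
# Number of level-1 sims S(N) created in interval N, from the recursion
# S(N) = X * [ S(N-1) + F*S(N-2) + F^2*S(N-3) + ... + F^(N-2)*S(1) ].
# X and F are illustrative values only.

def level1_sims(N, X=3.0, F=2.0):
    S = [0.0, 1.0]  # S[1] = 1: the single base-level world
    for n in range(2, N + 1):
        S.append(X * sum(F ** (k - 1) * S[n - k] for k in range(1, n)))
    return S

S = level1_sims(10)
# Successive ratios S(N+1)/S(N) settle down immediately (here to X + F = 5),
# i.e. the number of level-1 sims grows exponentially over time.
ratios = [S[n + 1] / S[n] for n in range(2, 10)]
```

Running this with other X, F > 0 gives the same picture: a constant growth ratio from level 3 onward.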
So, assuming we are a sim: unless one believes computing power will grow beyond any bound, one doesn't learn much about how deeply nested the world could be. That is pretty intriguing. I might have to run it past some of our anthropics wizards in the office to check my reasoning.

Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University

 
On Fri, Apr 11, 2014 at 12:22 AM, Anders Sandberg <anders at aleph.se> wrote:
 Tomaz Kristan <protokol2020 at gmail.com> , 10/4/2014 11:18 AM: 
> But is now the true year 2014, or a simulation of 2014 run in the year 4,982,944?

Is this the true year 4,982,944 with the simulation of 2014, or just a simulation of all that, run in the year 4,982,945?
 
I am not saying this (infinite) regress kills the probability that ancestor simulations are going on. It weakens it.
Suppose the amount of available computing power grows exponentially as exp(t). To run an ancestor sim you need at least as much computing power as the simulated era contained, so in practice you will only run simulations that are a factor F smaller than your current capacity; that is, you have a choice of civilizations from the start of time T0 up to time T - ln(F), where T is the current real time. A sim of time t will potentially contain simulations of times earlier than t - ln(F), and so on.
So between T0 and T0+ln(F) there will be no ancestor sims. Between T0+ln(F) and T0+2ln(F) there will be simulations of the first interval. Between T0+2ln(F) and T0+3ln(F) there will be some simulations of the first interval (of which many more can be done, since they are so small), and some simulations of the second one (which may contain simulations of the first interval). In general, in interval N there can be X sims of interval N-1, FX sims of interval N-2, F^2 X sims of interval N-3, and so on, up to F^(N-2) X sims of interval 1. In addition, some of the late-interval simulations contain simulations of earlier intervals.
So in this model, it looks like we should expect an ever increasing number of simulations of the earlier intervals, with the ratio of the early to the late going up exponentially. So this predicts that a random observer should expect to be in a simulation of an early interval.
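The interval bookkeeping above can be sketched in a few lines of Python (X and F are again illustrative values, with F > 1 so that smaller, earlier sims are cheaper and hence more numerous):

```python
# During interval N, a civilization can directly run F^(k-1) * X sims of
# interval N-k: each interval further back is a factor F cheaper, so F times
# as many copies fit in the same budget. X and F are illustrative values only.

def direct_sims_in_interval(N, X=3, F=2):
    """Map from simulated interval to the number of direct sims run during interval N."""
    return {N - k: X * F ** (k - 1) for k in range(1, N)}

counts = direct_sims_in_interval(6)
# The most recent interval gets only X sims, while interval 1 gets
# F^(N-2) * X of them: the early-to-late ratio grows exponentially with N.
```

This only counts direct sims; the nested sims-within-sims tilt the count toward the early intervals even further.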



Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University
_______________________________________________
 extropy-chat mailing list
 extropy-chat at lists.extropy.org
 http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
 



-- 
https://protokol2020.wordpress.com/
  
