[ExI] for the fermi paradox fans

Anders Sandberg anders at aleph.se
Wed Jun 11 22:28:12 UTC 2014


Another thing I would love to find out is the mass/depth trade-off for memory storage. Suppose you have a lot of mass and want to turn it into as much good computer memory as possible. What configuration is best? 
The tunnelling probability across a potential barrier scales as exp(-L sqrt(m E)), where L is the width of the barrier, m the particle mass and E the potential depth (constants such as hbar are absorbed into the scaling). The energy or negentropy losses due to tunnelling will be proportional to this. You could spend the mass on really deep potential wells, on making the barriers physically wide, or even on heavy objects to represent your bits. Which is the best approach? How does the depth you can buy scale with large amounts of mass?
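A quick numerical sketch of the trade-off, in Python. The budget B and the three allocations are illustrative assumptions, not a claim about the physically right split, and units and constants are absorbed into the exponent; the point is just that the barrier width enters the exponent linearly while marker mass and potential depth enter only under a square root:

import math

def suppression(L, m, E):
    """Tunnelling factor exp(-L*sqrt(m*E)); smaller means fewer errors."""
    return math.exp(-L * math.sqrt(m * E))

B = 100.0  # arbitrary budget to split between width, mass and depth

allocations = {
    "wide barrier": dict(L=B,   m=1.0, E=1.0),
    "deep well":    dict(L=1.0, m=1.0, E=B),
    "heavy marker": dict(L=1.0, m=B,   E=1.0),
}

for name, p in allocations.items():
    print(f"{name:>12}: exp(-L*sqrt(mE)) = {suppression(**p):.3e}")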
If you have N bits of mass m1 each (initially m1 = M/N), they will need correction roughly N exp(-L sqrt(m E)) times per second; eventually you will run out of stored negentropy and have to burn mass to radiate the correction entropy to the background radiation. Since each correction costs kT ln(2), each second you lose about N exp(-L sqrt(m E)) kT ln(2)/(m1 c^2) bits' worth of mass: N' = -lambda N, where lambda = kT ln(2) exp(-L sqrt(m E))/(m1 c^2). So the half-life of computer memory in this phase is inversely proportional to temperature, exponential in the bit size, exponential in the square root of the marker mass and potential depth, and proportional to the bit mass. So it looks like making bits *really large* is a good idea.
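A minimal sketch of that decay model; the temperature, the tunnelling factor eps = exp(-L sqrt(m E)) and the bit-marker mass below are illustrative placeholders, not estimates:

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
c   = 2.99792458e8   # speed of light, m/s

def decay_rate(T, eps, m1):
    """lambda = k*T*ln(2)*eps / (m1*c^2): fraction of bits lost per second."""
    return k_B * T * math.log(2) * eps / (m1 * c**2)

def half_life(T, eps, m1):
    """Time until half the memory has been burned for error correction."""
    return math.log(2) / decay_rate(T, eps, m1)

# Placeholder values: a very cold far-future background, a strongly
# suppressed tunnelling factor, and a 1 kg bit marker.
T, eps, m1 = 1e-30, 1e-40, 1.0
print(f"lambda    = {decay_rate(T, eps, m1):.3e} /s")
print(f"half-life = {half_life(T, eps, m1):.3e} s")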
One figure of merit might of course be the total number of bit-seconds. That scales as integral_0^infty N dt = N(0) [-exp(-lambda t)/lambda]_0^infty = N(0)/lambda, i.e. proportional to the half-life times the initial number of bits. However, the initial number N(0) = M/m1 scales as 1/m1 while 1/lambda scales as m1, so the m1 factors cancel: it is not the bit mass that matters, just temperature, size, marker mass and energy.
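A small symbolic check (using sympy) that the m1 dependence really does cancel out of the bit-seconds figure of merit:

import sympy as sp

M, m1, m, E, L, k, T, c = sp.symbols('M m1 m E L k T c', positive=True)

N0  = M / m1                                             # initial bit count
lam = k * T * sp.log(2) * sp.exp(-L * sp.sqrt(m * E)) / (m1 * c**2)

bit_seconds = sp.simplify(N0 / lam)
print(bit_seconds)   # M*c**2*exp(L*sqrt(E*m))/(T*k*log(2)) -- no m1 left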
So, giant positronium bits, anyone?

Anybody know how to estimate the max size of gravitationally bound aggregates in current cosmological models?

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University


Robin D Hanson <rhanson at gmu.edu>, 10/6/2014 8:56 PM:
  On Jun 10, 2014, at 5:04 AM, Anders Sandberg <anders at aleph.se> wrote:
          So there is less obviously a reason to wait to spend entropy. The max entropy usually comes via huge black holes, and those can take time to construct and then to milk. That seems to me to place the strongest limits on when we expect negentropy to get spent.
  I don't think time is the resource that is most costly if you try to maximize the overall future computations of your lightcone. Capturing dark matter with black holes seems worthwhile, but I wonder about the thermodynamic cost of doing it.
  There is another reason to go slow: in reversible computers, as in other reversible systems, the entropy cost is proportional to the rate. That is, the entropy cost per gate operation is inverse in the time that operation takes. In the limit of going very slowly, the entropy cost per operation approaches zero.
  Robin Hanson  http://hanson.gmu.edu
  Res. Assoc., Future of Humanity Inst., Oxford Univ.
  Assoc. Professor, George Mason University
  Chief Scientist, Consensus Point
  MSN 1D3, Carow Hall, Fairfax VA 22030
  703-993-2326 FAX: 703-993-2323
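A minimal sketch of the 1/t scaling described above, using the standard adiabatic-charging toy model (dissipation per switching event ~ (R*C/t)*C*V^2 for t >> R*C); the resistance, capacitance and voltage values are illustrative placeholders:

def dissipation_per_op(t, R=1e3, C=1e-15, V=1.0):
    """Energy dissipated per adiabatic switching event, in joules (t >> R*C)."""
    return (R * C / t) * C * V**2

# Slowing the operation by a factor of 1000 cuts the loss per operation
# by the same factor.
for t in (1e-9, 1e-6, 1e-3):
    print(f"t = {t:.0e} s -> E_diss ~ {dissipation_per_op(t):.2e} J")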
  
  
    

_______________________________________________ 
extropy-chat mailing list 
extropy-chat at lists.extropy.org 
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat 