[ExI] More Fermi Paradox

Anders Sandberg anders at aleph.se
Wed May 15 10:52:28 UTC 2013


On 2013-05-15 10:19, Eugen Leitl wrote:
> On Wed, May 15, 2013 at 10:12:36AM +0100, Anders Sandberg wrote:
>
>> It all depends on what the ultimate goal is. If it is
>> experience-moments or pleasure, then spread far and wide and convert
>> everything to computronium or hedonium. If the goal requires a
>> cohesive big mind for a long time, then you only need a supercluster
> I don't see why you can't have GYr scale coherent plans with
> ~ps..~fs local refresh rates.

You misunderstood me. I was just talking about the total amount of stuff 
to colonize and whether it could form cohesive computational structures, 
not the refresh rate. Either of the above approaches benefits from high 
refresh rates, although the vast mind will likely have at least some 
serial components, causing an Amdahl's law slowdown.
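
To make the Amdahl point concrete, here is a minimal Python sketch; 
the serial fractions are made-up illustrative values, not estimates 
for any actual mind architecture:

    # Toy illustration of Amdahl's law: even a galaxy-scale parallel mind
    # is rate-limited by whatever fraction of its thinking is serial.
    # The serial fractions below are invented for illustration.

    def amdahl_speedup(serial_fraction, n_processors):
        """Maximum speedup of a task with a fixed serial fraction
        when spread over n_processors parallel units."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    for s in (0.1, 0.01, 0.0001):
        # Even with effectively unlimited hardware, speedup saturates at 1/s.
        print(f"serial fraction {s}: speedup with 1e12 units = "
              f"{amdahl_speedup(s, 1e12):.1f} (cap = {1/s:.0f})")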

On the big scales the constraints are (1) whether you can accept parts 
of your domain losing contact forever (due to accelerating expansion 
moving non-bound systems beyond the horizon), (2) how much stuff you can 
reach (which depends on travel speed and start time), and (3) your 
trade-off between update speed, memory storage and energy usage.
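
As a toy version of constraint (2): in an idealized de Sitter 
approximation with constant Hubble rate H, a probe moving at peculiar 
speed v only ever reaches comoving distance v/H, since the integral of 
v dt/a(t) with a(t) = exp(Ht) converges to v/H. A sketch with rough 
illustrative numbers (not a real LCDM calculation):

    import math

    H = 2.2e-18            # Hubble rate, ~1/s (roughly 67 km/s/Mpc)
    c = 3.0e8              # speed of light, m/s
    MPC = 3.086e22         # metres per megaparsec

    for f in (0.01, 0.1, 0.5, 0.99):
        reach_m = f * c / H            # comoving reach in metres
        print(f"v = {f:.2f}c -> comoving reach ~ {reach_m / MPC:,.0f} Mpc")

Delaying the start time only shrinks the reachable set further, which 
is why the start time matters as much as the speed.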

If you have M units of mass divided into memory cells of mass m, the 
minimal energy dissipation per second due to error correction scales as 
kT ln(2) (M/m) exp(-qm), where q is some constant linked to the 
tunneling/error probability: the Landauer cost kT ln(2) per corrected 
bit, times the number of cells M/m, times the per-cell error rate 
exp(-qm). If you dissipate fast, getting rid of the heat is a big 
problem and likely the limiting factor; using Wei Dai-style black hole 
cooling or radiating towards the cosmological horizon offers only 
limited thermal emission capacity. If you dissipate slowly, your 
overall mass M will decline exponentially with decay constant 
kT ln(2) exp(-qm)/(mc^2): big m allows you to last long, but you get 
few bits. T also declines towards an asymptote due to horizon 
radiation, so waiting for the universe to cool is rational only up to 
some point.
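
A minimal numerical sketch of this trade-off; T and q are arbitrary 
placeholder values chosen for illustration (the real q depends on the 
tunneling physics, and T here is taken near the horizon-temperature 
order of magnitude):

    import math

    # Error-correction dissipation trade-off from the formula above.
    # Paying for corrections out of the mass budget (E = M*c^2) gives an
    # exponential decay of M with rate k*T*ln(2)*exp(-q*m)/(m*c^2).

    k = 1.380649e-23   # Boltzmann constant, J/K
    c = 3.0e8          # speed of light, m/s
    T = 1e-29          # ambient temperature, K (placeholder asymptote)
    q = 1e27           # tunneling constant, 1/kg (placeholder)
    M = 1.0            # total memory mass budget, kg

    for m in (1e-26, 5e-26, 1e-25):         # per-cell masses, kg
        cells = M / m                       # number of memory cells (bits)
        rate = k * T * math.log(2) * math.exp(-q * m) / (m * c * c)
        half_life = math.log(2) / rate      # seconds until half of M is gone
        print(f"m = {m:.0e} kg: {cells:.1e} bits, "
              f"mass half-life ~ {half_life:.2e} s")

The qualitative lesson survives the made-up constants: each doubling of 
m costs you half your bits but buys exponentially more longevity.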

Note that this assumes all computations are reversible. I recently 
checked quantum computation and error correction, and it is pretty 
awesome... but you still need to erase ancilla bits once they have 
become error-tainted. Quantum computation lets you get a lot of 
computation done in few steps, but m will be very low, so you will 
have to pay a lot for it in error-correction dissipation.
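
A back-of-envelope sketch of that ancilla-erasure floor; all the 
machine parameters below are invented for illustration and not a 
statement about any specific error-correction scheme:

    import math

    # Each quantum error-correction cycle produces syndrome/ancilla bits
    # that end up error-tainted and must be erased, at a minimum Landauer
    # cost of k*T*ln(2) per bit, even if the logical computation itself
    # is fully reversible.

    k = 1.380649e-23            # Boltzmann constant, J/K
    T = 1e-29                   # ambient temperature, K (placeholder)

    logical_qubits = 1e9        # illustrative machine size
    ancillas_per_logical = 1e3  # ancilla bits per logical qubit per cycle
    cycles_per_second = 1e6     # error-correction cycle rate (placeholder)

    erasures = logical_qubits * ancillas_per_logical * cycles_per_second
    power = erasures * k * T * math.log(2)   # watts
    print(f"{erasures:.1e} erasures/s -> {power:.2e} W minimum dissipation")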

In a universe where little but your own activity is going on, external 
time does not matter much, so running things slowly is fine if it buys 
you more ops. But before then a spoiler civilization might want to run 
its short-term projects using "hot" low-m, high-T, very wasteful 
computation; there is some interesting game theory here about how 
agents with different goals might try to pre-empt each other.
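
A deliberately crude toy of that pre-emption game, with payoffs 
invented purely to show the structure (they are not derived from any 
of the physics above): two agents each choose to compute "hot" now or 
"cold" later, and a cold waiter risks being pre-empted by a hot rival.

    from itertools import product

    # payoffs[(a, b)] = (payoff to A, payoff to B). "cold" is efficient
    # but can be pre-empted by a "hot" rival grabbing shared resources.
    payoffs = {
        ("cold", "cold"): (10, 10),   # both wait: maximum total ops
        ("cold", "hot"):  (2, 6),     # A waits, B burns resources first
        ("hot",  "cold"): (6, 2),
        ("hot",  "hot"):  (4, 4),     # both burn: wasteful but safe
    }

    # Find pure-strategy Nash equilibria by checking unilateral deviations.
    for a, b in product(("cold", "hot"), repeat=2):
        pa, pb = payoffs[(a, b)]
        best_a = all(pa >= payoffs[(a2, b)][0] for a2 in ("cold", "hot"))
        best_b = all(pb >= payoffs[(a, b2)][1] for b2 in ("cold", "hot"))
        if best_a and best_b:
            print(f"Nash equilibrium: A={a}, B={b}, "
                  f"payoffs={payoffs[(a, b)]}")

With these numbers the game is a stag hunt: both (cold, cold) and 
(hot, hot) are equilibria, so mere fear of a spoiler can push everyone 
into wasteful hot computation.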

> Speaking about clusters: 
> https://docs.google.com/file/d/0B83UyWf1s-CdZnFoS2RiU2lJbEU/edit?usp=drive_web 
> Pony: not yours. At least not by 2020. Little novelty there for anyone 
> who's been paying attention, but this is mainstream now.

In fact, for a pessimistic lecture it is pretty optimistic. Maybe we 
won't get ponies, but we might get cats.

-- 
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University



