[ExI] More Fermi Paradox

Anders Sandberg anders at aleph.se
Fri May 17 11:07:39 UTC 2013


On 2013-05-15 12:11, Eugen Leitl wrote:
> On Wed, May 15, 2013 at 11:52:28AM +0100, Anders Sandberg wrote:
>
>> You misunderstood me. I was just talking about the total amount of
>> stuff to colonize and whether it could form cohesive computational
> The total colonizable area is limited by how soon you start and
> how hard you travel. For us that would be something like
> 16+ GYrs, which is not too bad.

And it is the speed of travel that has the biggest effect. If you travel 
at v+w but start t units of time later than something travelling at v, 
you will catch up (in co-moving coordinate space) when (v+w)(T-t)=vT, or 
T=(1+v/w)t. Since travel times to remote galaxies are in gigayears, even 
a fairly small w increment means you will sweep past the slower 
travellers long before the destination.
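
A toy back-of-envelope of that T = (1+v/w)t formula, in Python with made-up
numbers just to show the scaling:

# Catch-up time in co-moving coordinates for a faster wave launched later:
# solve (v + w)(T - t) = v T  =>  T = (1 + v/w) t
def catchup_time(v, w, t):
    """v: speed of the early wave, w: speed advantage of the later wave,
    t: head start of the early wave. Returns the time T at which they meet."""
    return (1.0 + v / w) * t

# A 0.5c wave with a 1 Myr head start is overtaken by a 0.51c wave after
# about 51 Myr - a tiny fraction of the gigayear travel times to remote
# galaxies.
print(catchup_time(0.5, 0.01, 1e6))   # ~5.1e7 years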

>> benefit from high refresh rates, although the vast mind will likely
>> have at least some serial components causing an Amdahl's law
>> slowdown.
> I disagree. There are no serial sections in biology, everything
> is asynchronous at the bottom. The top processes may appear
> serial (just as we're communicating now by a serial stream
> of ASCII), but that does not translate down well.

No guarantee that there are no serial elements to the thought processes 
of the Highest Possible Level of Development intelligences.

I think the main problem is dependencies. M-brain A needs some data from 
M-brain B a lightyear away to continue its calculation, so it has to stall 
for two years while the request and the reply make the round trip. Sure, 
the structure of the big computation has been optimized as far as possible, 
but it seems that many big problems have a messy structure that might 
preclude perfect optimization (since you do not yet know the path the 
calculation will take), and re-adjusting what is stored where is costly.
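
A toy Amdahl-plus-light-lag model in Python, with made-up numbers, just to
show how serial fractions and communication stalls compound:

# Amdahl's law plus communication stalls for a computation spread over
# nodes separated by light-lag. All numbers are illustrative.
def amdahl_speedup(s, n):
    """Classic Amdahl's law: s = serial fraction, n = parallel nodes."""
    return 1.0 / (s + (1.0 - s) / n)

def effective_speedup(s, n, compute_time, stall_time):
    """Speedup when each unit of local work also has to wait stall_time for
    remote dependencies (e.g. a 2-year round trip to an M-brain a lightyear
    away)."""
    utilization = compute_time / (compute_time + stall_time)
    return amdahl_speedup(s, n) * utilization

# 1% serial work, a million nodes, 1 year of local compute per 2-year stall:
print(amdahl_speedup(0.01, 1_000_000))               # ~100: serial part dominates
print(effective_speedup(0.01, 1_000_000, 1.0, 2.0))  # ~33: stalls cut it further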

>> On the big scales the constraints are (1) if you can accept parts of
>> your domain losing contact forever (due to accelerating expansion
> You're shedding skin cells, and not noticing it much, so
> I don't see how shedding periphery (which is very slow initially,
> and only picks up in earnest in the last years and months
> of the universe) is going to be a problem.

I am basing this on the standard cosmological model fitted to the WMAP 
data, in which the universe asymptotically approaches a de Sitter 
spacetime. There are no "last months", just an exponentially growing 
separation of the superclusters into islands with no causal contact.
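
For concreteness, the "16+ GYrs" figure above is roughly the comoving
distance (in Gly) to the cosmological event horizon in that model. A quick
numerical sketch, assuming rough WMAP-era flat LambdaCDM parameters:

# Comoving distance to the cosmological event horizon in flat LambdaCDM.
# Parameters are approximate WMAP-era values, for illustration only.
import numpy as np
from scipy.integrate import quad

H0 = 70.0                 # km/s/Mpc
Omega_m, Omega_L = 0.27, 0.73
c = 299792.458            # km/s
hubble_dist_Gly = (c / H0) * 3.2616e-3   # Mpc -> Gly (1 Mpc ~ 3.2616 Mly)

def E(a):
    """Dimensionless Hubble rate H(a)/H0 for a flat matter+Lambda universe."""
    return np.sqrt(Omega_m / a**3 + Omega_L)

# d_EH = (c/H0) * integral_1^inf da / (a^2 E(a)): the farthest comoving
# distance a signal (or probe at ~c) sent today can ever reach.
integral, _ = quad(lambda a: 1.0 / (a**2 * E(a)), 1.0, np.inf)
print(f"Comoving event horizon ~ {hubble_dist_Gly * integral:.1f} Gly")  # ~16 Gly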

>
>> Dai-style black hole cooling or radiating it towards the
>> cosmological horizon has a limited thermal emission ability.  For
>> slow dissipation your overall mass M will decline as an exponential
>> with constant kTln(2)exp(-qm)/mc^2 - big m allows you to last long,
>> but you get few bits. T also declines towards an asymptote due to
>> horizon radiation, so waiting is rational only up to some time.
>>
>> Note that this assumes all computations to be reversible. I recently
> Reversible computation should be slow, and might be too slow for
> practical purposes.

Only on this list can you hear somebody say that reversible computing 
might be impractical, while not remarking on cooling using inverse Dyson 
shells around black holes :-)


>
>> checked quantum computation and error correction, and it is pretty
>> awesome... but you still need to erase ancilla bits once they have
>> become error tainted. Using quantum computation allows you to get a
>> lot of computation done in few steps, but m will be very low so you
>> will have to pay a lot for it.
> I'm not counting on nonclassical computation. I expect there won't
> be a free lunch anywhere.

Nah, quantum computation does seem to be better than classical for a bunch 
of problems. It might not be better at all problems (the low m issue, as 
well as the no-cloning restriction), but I suspect it would be a component 
of a sufficiently advanced infrastructure.

Grover-style O(sqrt(N)) search is pretty nifty - you can really reduce 
the number of function evaluations a lot. If you have data in a sorted 
list, sure, O(log(N)) beats it. But in many cases your data might be 
implicit, such as looking for roots of arbitrary equations or 
satisfiability. There is even a very cool quantum algorithm based on 
scattering for finding winning game strategies in O(sqrt(N)) time ( 
http://www.scottaaronson.com/blog/?p=207 - Moore and Mertens' "The 
Nature of Computation" ends with a very lucid description of the 
method). So I think HPLDs might have reason to make use of quantum 
computation. And of course, to simulate quantum stuff quantum computing 
is very effective.
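
For flavour, a minimal state-vector toy of Grover search in Python/numpy -
the search size and marked item are arbitrary choices of mine - showing the
~(pi/4)sqrt(N) iteration count:

# Toy Grover search over N items with one marked element, simulated with a
# plain state vector. A real device would not store all N amplitudes, of
# course; this is just to show the query scaling.
import numpy as np

N = 1024                  # search space size
marked = 357              # the item the oracle recognizes (arbitrary choice)

state = np.full(N, 1.0 / np.sqrt(N))             # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))  # ~25 oracle calls for N=1024

for _ in range(iterations):
    state[marked] *= -1.0                 # oracle: flip the marked amplitude
    state = 2.0 * state.mean() - state    # diffusion: invert about the mean

print(iterations, np.argmax(state**2), state[marked]**2)
# 25 iterations find item 357 with probability ~0.999, versus ~N/2 = 512
# expected evaluations for classical unstructured search.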

The fact that the universe in late eras is cold, quiet and tame doesn't 
mean one should use slow algorithms: the energy clock is still ticking due 
to error-correction losses.
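
To put a rough number on that clock - my own order-of-magnitude
illustration: even at the de Sitter horizon temperature, each erased
error-tainted bit costs at least kT ln 2.

# Order-of-magnitude Landauer cost of erasing error-tainted bits in a
# far-future de Sitter universe. Illustrative numbers only.
import numpy as np

k_B = 1.380649e-23      # J/K
T_horizon = 2.4e-30     # K, rough de Sitter horizon temperature for observed Lambda

landauer_J_per_bit = k_B * T_horizon * np.log(2)
print(f"Minimum erasure cost: {landauer_J_per_bit:.1e} J per bit")   # ~2.3e-53 J

# A solar mass (~1.8e47 J as mc^2) then pays for at most ~8e99 bit erasures:
# enormous, but finite - the clock ticks with every corrected error.
print(1.8e47 / landauer_J_per_bit)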

-- 
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University



