[ExI] for the fermi paradox fans

Dennis May dennislmay at yahoo.com
Sat Jun 14 00:32:48 UTC 2014


It would seem that the exponentially growing overhead
needed to correct for noise introduces an even larger
exponentially growing overhead to watch for potential
virus-like information [coded in the form of white noise?]
whose purpose might be nothing more than to introduce
factors that generate yet more noise into the system.

In other words, exponential susceptibility to noise also
creates an even larger exponential susceptibility to
sabotage that exploits it.  Not to mention accidentally
evolved or deliberately introduced code which might cause
the same type of issue.

If you thought computer viruses were an issue now,
wait until the noise-correction methodology itself can
be targeted.

Exponentially long codes that take advantage of large
capabilities are themselves a source of exponential
susceptibility.

It all goes back to the logic of castles vs cannon balls.

http://www.historylearningsite.co.uk/end_of_castles.htm

How large and defensible can a system grow before 
countermeasures evolve to overwhelm the advantages
size creates?

In our case, what is the optimal size/type of intelligence
before it is undone by internal and/or external factors?

Dennis May
 

________________________________
 From: Anders Sandberg <anders at aleph.se>
To: ExI chat list <extropy-chat at lists.extropy.org> 
Sent: Friday, June 13, 2014 1:13 PM
Subject: Re: [ExI] for the fermi paradox fans
  


Continuing my thinking about extreme future computation:

http://arxiv.org/abs/quant-ph/9611028 "Limitations of noisy reversible computation" shows that noise is pretty bad for reversible computations: the total size of the circuits needs to grow exponentially with their depth in order to produce reliable computation (in normal circuits the growth is just polynomial). This is also true for quantum computations. Both kinds of circuits can simulate general classical or quantum circuits, but the cost is exponential.

Of course, the glass-is-half-full view is that one can build reliable systems out of noisy components (a la http://arxiv.org/abs/quant-ph/9906129 ) if the noise is below a certain level: you just need to pay a price for it. Very much like error-correcting codes in classical channels. But this shows that intricate messages require exponentially long codes, so to speak.
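
A back-of-envelope sketch of the gap (my own toy numbers, not taken from the paper): with standard fault tolerance the overhead of a width-n, depth-d circuit grows roughly polynomially, while for noisy reversible circuits the total size has to grow exponentially in d. Something like:

    # Toy comparison with assumed constants, illustrative only:
    # polynomial overhead for ordinary fault-tolerant circuits vs.
    # exponential-in-depth size for noisy reversible circuits.
    import math

    def irreversible_size(n, d, c=2.0):
        # standard fault tolerance: size ~ n * d * polylog(n * d)
        return n * d * math.log(n * d, 2) ** c

    def reversible_size(n, d, k=0.1):
        # noisy reversible computation: size must grow as exp(k * d),
        # with k an assumed noise-dependent constant
        return n * math.exp(k * d)

    for d in (10, 100, 1000):
        print(d, irreversible_size(10**6, d), reversible_size(10**6, d))

Even with a generous k the reversible column races past the irreversible one long before the depths get interesting.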

So if extreme future life runs on largely reversible computers (classical or quantum), it needs to perform a Bennett-style undoing of intermediate results (http://www.math.ucsd.edu/~sbuss/CourseWeb/Math268_2013W/Bennett_Reversibiity.pdf) relatively often, not just because error correction is expensive (if N bits are involved in the error syndrome, they all need to be reset at a cost) but because the retrace corresponds to a long circuit depth, and the total circuit size hence goes up exponentially. So the amount of hardware grows with retrace length R as exp(R), and presumably the number of bit errors that need to be fixed grows proportionally - eventually it is better to just wipe the memory state completely rather than try to erase syndromes and retrace.
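
To put a toy number on that crossover (my framing, with made-up constants): if the erasure work for a retrace scales as exp(alpha*R) while a full wipe costs a flat N Landauer erasures, wiping wins once exp(alpha*R) exceeds N, i.e. beyond R ~ ln(N)/alpha.

    # Toy break-even between retracing and a full memory wipe.
    # N and ALPHA are assumed, illustrative values.
    import math

    N = 10**9        # bits in the memory state (assumed)
    ALPHA = 0.05     # assumed growth constant for retrace costs

    def retrace_cost(R):
        # erasure work for undoing a retrace of length R, assumed to
        # grow as exp(ALPHA * R) (in Landauer units of kT ln 2)
        return math.exp(ALPHA * R)

    def wipe_cost():
        # erasing the whole memory: N Landauer erasures
        return N

    R_star = math.log(N) / ALPHA   # where exp(ALPHA * R) = N
    print("wiping beats retracing beyond R ~", round(R_star))

With these numbers the break-even comes around R ~ 400; the real constants matter, but the qualitative point is that the crossover only grows logarithmically with memory size.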


Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University