[ExI] The size of individual self

Eugen Leitl eugen at leitl.org
Sun Apr 28 16:48:33 UTC 2013

On Sun, Apr 28, 2013 at 11:34:09AM -0400, Rafal Smigrodzki wrote:

> > Ten bux and the annotated connectome alone sez you're off by many orders
> > of magnitude.
> ### You might want to go to my posts from a few months ago, I
> explained what I meant. Briefly, I do not believe that precise mapping
> of my mind at synapse level is needed to reconstruct my subjective
> experience. Most of my base-level sensory and other percepts are
> generic - therefore, you could for example use a generic visual
> cortex, maybe with a few tweaks, instead of an exact copy of mine, and
> the resulting mental imagery would be sufficiently close to claim
> functional equivalence. Extending this reasoning to the whole brain
> means using a very generic human brain with a thin veneer of

You assume there's an Everyman template, and that the diffs across the
human population are negligible. I'm not going to buy that
without a lot of evidence.

> personalization, similar to describing an individual human genome in
> terms of a small list of deviations from a reference human genome,
> which does offer a few orders of magnitude savings on storage space.

I agree that one can derive a somewhat (though perhaps not dramatically,
since the connectome is not the genome) more compact representation that is
isofunctional with respect to external and internal experience, and
one that needs to be co-evolved for the particular hardware target.

However, even if this could be done, you will still need the delta to
Everyman, and that delta will be a veritable torrent of data from the
destructive scanner, requiring a Humongous Buffer to store it for processing.
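For a sense of scale, here is a back-of-envelope estimate of the raw scanner output, assuming roughly 1.4 litres of brain tissue imaged at 5 nm isotropic voxels with one byte per voxel (all three figures are assumptions for illustration, not measurements from any actual scanner):

```python
# Back-of-envelope size of a raw destructive-scan voxelset.
# Assumed inputs (illustrative only): ~1.4 L of tissue,
# 5 nm isotropic voxels, 1 byte per voxel.

brain_volume_m3 = 1.4e-3   # ~1.4 litres
voxel_edge_m = 5e-9        # 5 nm isotropic
bytes_per_voxel = 1

voxels = brain_volume_m3 / voxel_edge_m ** 3
raw_bytes = voxels * bytes_per_voxel

print(f"{voxels:.2e} voxels, ~{raw_bytes / 1e21:.1f} zettabytes raw")
# -> 1.12e+22 voxels, ~11.2 zettabytes raw
```

Under these assumptions, even 1:100 compression of the raw voxelset still leaves on the order of a hundred exabytes to buffer, which gives some idea of what "Humongous" means here.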

It would be interesting to see how much compression we can get starting
from the raw voxelset. Arguably wavelet coding can give you at least 1:10
if not 1:100 (though the latter could introduce enough artifacts to
throw a monkeywrench into the feature segmentation), and the
trace will be more compact still. However, I think you will need
the fully annotated feature trace at least at 1-5 nm (with the annotation
potentially derived from selective sampling at much higher resolution)
as a point of departure for these hypothetical more compact
representations. And I expect that encoding will be quite computation-intensive
and time-consuming (but highly parallelizable).
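A rough feel for where the wavelet gains come from: after a transform such as Haar, homogeneous regions collapse into a handful of nonzero coefficients, and thresholding away the rest is the compression. A toy 1-D sketch (the "profile" below is an idealized piecewise-constant signal, the best possible case for Haar; real voxel data is 3-D, noisy, and nowhere near this compressible, hence the artifact worry at 1:100):

```python
def haar_forward(signal):
    """Full multi-level 1-D Haar transform (length must be a power of two)."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        # Pairwise averages (approximation) and differences (detail).
        avgs = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avgs + diffs   # recurse on the approximation half
        n = half
    return out

# Toy "density profile": two homogeneous regions, the ideal case for Haar.
signal = [1.0] * 128 + [3.0] * 128

coeffs = haar_forward(signal)
kept = sum(1 for c in coeffs if abs(c) > 1e-12)
print(f"kept {kept}/{len(coeffs)} coefficients -> ratio ~1:{len(coeffs) // kept}")
# -> kept 2/256 coefficients -> ratio ~1:128
```

On real data one would threshold small-but-nonzero coefficients and entropy-code the survivors; the achievable ratio before segmentation-breaking artifacts appear is exactly the open question above.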
