<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 26/12/2025 12:59, John Clark wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAJPayv1cCkLLyAuxhAYs1HXC5L-74XQ8iLS3xq65pwU8vSpqJw@mail.gmail.com">In
2024 researchers at Harvard and Google sliced one cubic millimeter
of a human brain into 5019 slices and then used an electron
microscope to take photographs of each slice, they then used the
resulting 1.4 petabytes (1,400 trillion bytes) of data to
construct a 3-D model of that cubic millimeter. They got a X-Y
resolution of 4 nanometers (a DNA molecule is 2.5 nanometers
thick) but the Z resolution (thickness) was only 30 nanometers.
That probably would be good enough resolution for an upload
but unfortunately it was just for one cubic millimeter, the
average human brain contains about 1,400,000 cubic millimeters.</blockquote>
<br>
On the surface, this sounds quite discouraging, but I'd just like to
do a brain-dump on the topic, having been thinking about it for a
while:<br>
<br>
<br>
The x-y resolution mentioned was overkill, by at least 10 times in
each axis, and the z resolution less so, but still probably higher
than necessary. Let's say the reduced resolution is just about right.
A 10-fold coarsening in both x and y cuts the data by a factor of
100, so the 1.4 petabytes becomes approx. 14 trillion bytes for 1
cubic mm.<br>
<br>
Multiply that by 1.4 million for the whole brain (almost certainly
more than is actually needed for our purposes, for several reasons;
as discussed previously, a generic human brain model could probably
cut that down considerably, with individual variations laid on top
of it), and we get 14 x 10^12 times 1.4 x 10^6 = 19.6 x 10^18
bytes. Say around 20 exabytes.<br>
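<br>
For what it's worth, a quick sanity check of that arithmetic as a
Python sketch (the per-cubic-mm figure and the brain volume are just
the assumptions from above):<br>
<pre>
# Back-of-envelope check of the storage estimate above.
raw_bytes_per_mm3 = 1.4e15                        # 1.4 PB for one cubic mm, as scanned
reduced_per_mm3 = raw_bytes_per_mm3 / (10 * 10)   # 10x coarser in both x and y
brain_mm3 = 1.4e6                                 # ~1,400,000 cubic mm per brain

total = reduced_per_mm3 * brain_mm3
print(f"{reduced_per_mm3:.3g} bytes per cubic mm")   # 1.4e+13, i.e. 14 trillion
print(f"{total:.3g} bytes total")                    # 1.96e+19, i.e. ~20 exabytes
</pre>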
<br>
That's a lot, but I reckon it can be reduced a fair bit (a lot,
actually). Most of the brain is white matter (axons), not grey
matter (the cortex, where the vast majority of the synaptic
information needs to come from), and all that's needed for the white
matter is a record of each axon's origin and destination, including
branchings. Since axons tend to be about 300nm in diameter, the
resolution doesn't have to be as high as for the cortex. Basically,
what's needed for the bulk of the brain is a wiring diagram (a
whopping big, complex one, but still nothing more than that.
Probably).<br>
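<br>
To make that concrete, such a wiring diagram could be stored as
little more than endpoints and branch points per axon. A minimal
sketch in Python (all names and fields here are hypothetical, purely
to show how compact the representation could be):<br>
<pre>
from dataclasses import dataclass, field

# Hypothetical minimal record for one axon in the white-matter
# "wiring diagram": where it starts, where it ends, where it forks.
@dataclass
class AxonRoute:
    source_neuron: int                  # ID of the originating cell body
    targets: list[int] = field(default_factory=list)  # destination neuron IDs
    branch_points: list[tuple[float, float, float]] = field(default_factory=list)
    # branch coordinates in micrometres; no per-voxel data stored at all

# A few tens of bytes per axon, versus every 300nm voxel along its length.
</pre>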
<br>
There's also the fact that the resolution of the scan needn't be the
resolution of the data derived from it, and therefore of the storage
needed. Lossless compression, combined with dynamic, intelligent
feature extraction, could reduce the scanned data on the fly: you
might be scanning 14 trillion bytes per cubic mm but condensing that
into a few megabytes or less, without losing anything relevant to
reconstructing the neural structure and function. There'd be no
point recording every 300nm voxel of white matter to produce a
connectome map; that would be like creating a raster diagram when a
vector diagram is far more efficient. I think there are parallels
here with the way our brains process vision, extracting and
compactly representing specific features of our visual fields and
ignoring the rest.<br>
Yes, the scanning would have to be very detailed, but as soon as a
structure can be detected and linked to previously identified
structure, the detailed data can be discarded.<br>
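<br>
As a rough illustration of the raster-versus-vector saving, here's a
toy calculation in Python (the axon length and branch count are
invented, purely for scale):<br>
<pre>
# Toy raster-vs-vector comparison for one 10mm-long axon at 300nm voxels.
voxel_nm = 300
axon_length_nm = 10e6                      # a 10 mm axon, say
raster_bytes = axon_length_nm / voxel_nm   # ~33,000 voxels, at 1 byte each

# Vector form: two endpoints plus ~10 branch points,
# each point as three 4-byte coordinates.
vector_bytes = (2 + 10) * 3 * 4            # 144 bytes

print(f"{raster_bytes / vector_bytes:.0f}x smaller")   # roughly 230x, per axon
</pre>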
<br>
When you think that we'd be effectively scanning trillions of
organic molecules, and converting their positions into a datastream,
you can see there's plenty of scope for compression, just for
starters.<br>
<br>
Unknowns include whether variations in the myelin sheaths are
important, and whether the support cells (glial cells) need to be
recorded as well or not. My money's on Not.<br>
<br>
It would probably be very useful to devise a scheme where different
aspects of the brain scan are condensed into separate maps, each
stored in a relatively compact way, to be integrated and expanded
into a functioning structure when it comes time to actually build
the upload.<br>
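<br>
Something like the following shape, perhaps (entirely hypothetical
field names; the point is just that each map can be compressed on
its own terms and only combined at reconstruction time):<br>
<pre>
from dataclasses import dataclass

# Hypothetical layered scan archive: each map compressed separately,
# integrated only when the upload is actually reconstructed.
@dataclass
class BrainScanArchive:
    wiring: bytes        # white-matter connectome, stored as vectors
    synapses: bytes      # cortical synapse table: partners, strengths, types
    morphology: bytes    # coarse cell shapes and positions
    chemistry: bytes     # receptor/neurotransmitter annotations

    def reconstruct(self):
        """Integrate the maps into a functioning structure (unspecified here)."""
        raise NotImplementedError
</pre>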
<br>
So overall, I expect the 20 exabyte estimate for capturing all the
relevant data from an individual brain is vastly greater than what
will eventually be needed, once we figure out what has to be
captured, how to encode and store it, and what can be recorded as
variations on a 'canonical human brain' model. That last idea could
potentially make <i>enormous</i> savings on the amount of data
needed to encode an individual brain, if it turns out to be
feasible.<br>
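<br>
The canonical-model idea is essentially delta encoding: store one
reference brain once, then only the per-individual differences. A
toy Python sketch of the principle (arrays standing in for whatever
the real maps turn out to be):<br>
<pre>
import numpy as np

# Toy delta encoding against a shared canonical model: individuals are
# stored only as their (mostly sparse) differences from the reference.
canonical = np.random.rand(1_000_000)   # stand-in for a shared brain map
individual = canonical.copy()
individual[::1000] += 0.05              # this person differs in 0.1% of entries

delta = individual - canonical
nonzero = np.flatnonzero(delta)         # only these entries need storing
stored = (nonzero, delta[nonzero])
print(len(nonzero), "of", len(canonical), "values stored")   # 1000 of 1000000
</pre>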
<br>
Another factor will be the time needed to scan an entire brain. And
there's also the problem of the scanning method dumping heat into
the tissue surrounding the area being scanned, potentially messing
up the structure and chemical environment.<br>
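<br>
On the time question, a crude estimate (the throughput figure is a
pure guess, just to show how the numbers scale):<br>
<pre>
# Crude scan-time estimate. ASSUMPTION: an imaginary scanner capturing
# raw data at 1 GB/s; the real figure could be off by orders of magnitude.
bytes_per_mm3 = 1.4e15        # raw data per cubic mm, per the Harvard/Google scan
brain_mm3 = 1.4e6
rate_bytes_per_s = 1e9        # assumed 1 GB/s throughput

seconds = bytes_per_mm3 * brain_mm3 / rate_bytes_per_s
years = seconds / (3600 * 24 * 365)
print(f"{years:.0f} years")   # ~62,000 years at that rate: hence parallel scanning
</pre>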
<br>
We probably need entirely different scanning methods from what we
currently have, or it won't be remotely practical. I've no idea how
to approach that problem.<br>
<br>
All this makes me more convinced than ever that cryonic suspension
alone won't be enough to prepare a brain for the kind of scanning
that will be needed for uploading. Aldehyde stabilisation, or
something equivalent, might just be the start.<br>
<br>
Now I'm wondering if and how a brain could be dismantled - as in
separated into distinct sections - without damaging anything
essential, either before or after preservation. Then each section
could be separately scanned, in parallel. Hmmm...<br>
<br>
If axons were elastic, I wonder if a 'reverse convolution' could be
done on the cortex. Cortices, rather. I expect the cerebellum would
need to be uploaded as well, which presents its own problems
(smaller neurons) and opportunities (a much more regular structure
than elsewhere in the brain).<br>
<br>
Or, if the axons could somehow have markers set at their connections
to the cortical layers, then be removed for separate scanning, the
cortices could be spread out into, what, four or five big thin
sheets?<br>
<br>
Any other ideas?
<pre class="moz-signature" cols="72">--
Ben</pre>
<br>
</body>
</html>