<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 28/12/2025 12:42, Adrian Tymes
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:mailman.42.1766925776.15606.extropy-chat@lists.extropy.org">
<div class="moz-text-plain" wrap="true" graphical-quote="true"
style="font-family: -moz-fixed; font-size: 12px;"
lang="x-unicode">
<pre wrap="" class="moz-quote-pre">On Sat, Dec 27, 2025 at 6:07 PM Ben Zaiboc via extropy-chat
<a class="moz-txt-link-rfc2396E"
href="mailto:extropy-chat@lists.extropy.org"
moz-do-not-send="true"><extropy-chat@lists.extropy.org></a> wrote:
</pre>
<blockquote type="cite" style="color: #007cff;">
<pre wrap="" class="moz-quote-pre">The x-y resolution mentioned was overkill, by at least 10 times, and the z resolution less so, but still probably higher than necessary. Let's say it was just about right, though. That means approx. 14 trillion bytes for 1 cubic mm.
Times that by 1.4M for the whole brain (almost certainly not actually needed (for our purposes), for several reasons, and as discussed previously, a generic human brain model could probably cut down on that considerably, with individual variations laid on top of it), so we get 14 x 10<sup>12</sup> times 1.4 x 10<sup>6</sup> = 19.6 x 10<sup>18</sup> bytes (? please check this, I'm on dodgy ground here, given my mathematical challenges). Say around 20 exabytes.
That's a lot, but I reckon it can be reduced a fair bit (a lot, actually)
</pre>
</blockquote>
<pre wrap="" class="moz-quote-pre">Or don't bother. I once wrote a disk management system that could
handle up to yottabytes. There are predictions of petabyte hard
drives in the 2030s. It is quite conceivable for some future
single-device hardware, not much larger than (and perhaps
significantly smaller than) a typical adult human brain, to handle 20
exabytes. Emphasis on "future": it won't be tomorrow, but probably
this side of 2100. The preserved dead can wait that long, yes?</pre>
</div>
</blockquote>
<br>
That is a point, but I really don't see it being necessary, or even a
good idea. At least, not once we know what data is actually needed.
Even being conservative, recording and storing every single 300 nm
voxel of the entire white matter of a brain seems wasteful, to say the
least, if what's needed is just the start and end point of each axon
or axon branch, plus perhaps some extra data that applies to its
entire length. Even if more than that turns out to be needed (let's
say, just for argument's sake, that the position of each node of
Ranvier on each axon is useful for some reason I can't imagine, along
with the diameter of the axon, and a few more data points thrown in
for wiggle room), you could still distill that information from the
scan as it proceeds, probably in several cascading steps. The details
of the process don't matter here; the point is that you end up with
much, much less raw data to store, at very little cost beyond some
processing of the scan data. I don't see why this wouldn't be a good
idea.<br>
It would mean that more people could be stored in the same amount of
memory, and it would make the process of creating uploads from the
data easier and quicker.<br>
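<br>
Just to make the idea concrete, here's a toy sketch (in Python) of
what a per-axon record might look like instead of raw voxels. Every
name and number in it (the fields, the 50 nodes of Ranvier, the
coordinates) is a made-up illustration, not a claim about what the
real data would need to be:
<pre>
# A rough, purely illustrative sketch: replace the raw 300 nm voxel data
# for the white matter with one compact record per axon (or axon branch).
# All fields and values here are placeholders, not measurements.

from dataclasses import dataclass, field

@dataclass
class AxonRecord:
    start_xyz: tuple       # (x, y, z) of the axon's origin, in mm
    end_xyz: tuple         # (x, y, z) of its termination, in mm
    diameter_nm: float     # axon calibre
    ranvier_positions: list = field(default_factory=list)  # fractions 0..1 along its length

    def approx_size_bytes(self):
        # 6 coordinates + 1 diameter + one value per node, at 4 bytes each
        return 4 * (6 + 1 + len(self.ranvier_positions))

# One hypothetical axon with 50 nodes of Ranvier: a couple of hundred
# bytes, versus millions of 300 nm voxels along a multi-centimetre fibre.
axon = AxonRecord(
    start_xyz=(1.0, 2.0, 3.0),
    end_xyz=(40.0, 15.0, 7.5),
    diameter_nm=800.0,
    ranvier_positions=[i / 50 for i in range(50)],
)
print(axon.approx_size_bytes(), "bytes for this axon record")
</pre>
Whatever the real record ends up containing, the point is that it's a
small, fixed amount of data per fibre rather than per voxel.<br>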
<br>
Maybe the cortical scans would be a different matter, and storing
every single voxel would be a good idea there, but I can't see why the
strategy wouldn't work well for the white matter, which makes up a
large fraction of a brain's volume. Why record every grain of sand in
a desert when you're actually interested in recreating the shapes and
positions of the sand dunes?<br>
<br>
The problems with being content with preservation and waiting are
that we can't be sure how good our preservation protocols are in the
first place, and that preserved people can't make decisions about
things they couldn't predict beforehand. I think it would be
preferable to make the waiting period as short as possible, and to do
everything we can to make that happen. Why wait until handling 20
exabytes is routine, when we can already handle terabytes with
current, unremarkable technology, for the cost of figuring out some
tissue-to-data algorithms now rather than later? Sounds like a good
bet to me.<br>
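<br>
For what it's worth, a quick back-of-envelope check (the bytes per
cubic mm and the brain volume are the figures quoted above; the axon
count and bytes per record are nothing more than guesses, just to show
the scale):
<pre>
# Rough check of the figures above. The bytes per cubic mm and brain volume
# come from the quoted estimate; the axon count and record size are
# assumptions for illustration only.

bytes_per_mm3 = 14e12        # ~14 TB per cubic mm, from the scan estimate
brain_volume_mm3 = 1.4e6     # ~1.4 litres for a whole brain
raw_total = bytes_per_mm3 * brain_volume_mm3
print(f"raw scan: {raw_total:.2e} bytes")                  # 1.96e+19, ~20 exabytes

# Hypothetical reduced white-matter representation:
axon_records = 1e10          # assumed number of long-range axons/branches (a guess)
bytes_per_record = 500       # endpoints, diameter, nodes of Ranvier, spare room
reduced_total = axon_records * bytes_per_record
print(f"reduced white matter: {reduced_total:.2e} bytes")  # 5.00e+12, ~5 terabytes
</pre>
Even if those guesses are off by an order of magnitude or two, that's
still a long way short of 20 exabytes per person.<br>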
<br>
Less data is also easier to back up.<br>
<br>
<pre class="moz-signature" cols="72">--
Ben</pre>
<br>
</body>
</html>