[ExI] Mind Uploading: is it still me?

Ben Zaiboc ben at zaiboc.net
Tue Dec 30 16:03:48 UTC 2025


On 28/12/2025 12:42, Adrian Tymes wrote:
> On Sat, Dec 27, 2025 at 6:07 PM Ben Zaiboc via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>> The x-y resolution mentioned was overkill, by at least 10 times, and the z resolution less so, but still probably higher than necessary. Let's say it was just about right, though. That means approx. 14 trillion bytes for 1 cubic mm.
>>
>> Times that by 1.4M for the whole brain (almost certainly not actually needed (for our purposes), for several reasons, and as discussed previously, a generic human brain model could probably cut down on that considerably, with individual variations laid on top of it), so we get 14 x 10^12 times 1.4 x 10^6 = 19.6x10^18 bytes (? please check this, I'm on dodgy ground here, given my mathematical challenges). Say around 20 exabytes.
>>
>> That's a lot, but I reckon it can be reduced a fair bit (a lot, actually)
> Or don't bother.  I once wrote a disk management system that could
> handle up to yottabytes.  There are predictions of petabyte hard
> drives in the 2030s.  It is quite conceivable for some future
> single-device hardware, not much larger than (and perhaps
> significantly smaller than) a typical adult human brain, to handle 20
> exabytes.  Emphasis on "future": it won't be tomorrow, but probably
> this side of 2100.  The preserved dead can wait that long, yes?
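
First, a quick check of the arithmetic I asked about above. Taking the 
quoted figures at face value, it does come out at roughly 20 exabytes. 
Here's the back-of-envelope version, in case anyone spots a slip:

    # Quoted assumptions: ~14e12 bytes of scan data per cubic mm,
    # and ~1.4e6 cubic mm of brain to cover.
    bytes_per_mm3 = 14e12
    brain_volume_mm3 = 1.4e6

    total_bytes = bytes_per_mm3 * brain_volume_mm3
    print(f"{total_bytes:.2e} bytes")            # 1.96e+19
    print(f"{total_bytes / 1e18:.1f} exabytes")  # 19.6, call it 20

So 1.96 x 10^19 bytes, near enough 20 exabytes, as stated.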

That is a point, but I don't see it being necessary, or even a good 
idea. At least, not once we know what data is actually needed. Even 
being conservative, recording and storing every single 300 nm voxel of 
the entire white matter of a brain seems wasteful, to say the least, if 
what's really needed is the start and end point of each axon or axon 
branch, plus perhaps some extra data that applies to its whole length. 
Even if more than that turns out to be needed (say, just for argument's 
sake, data on each node of Ranvier along each axon, for some reason I 
can't imagine, plus the axon's diameter, plus a few more data points 
for wiggle room), you could still distill that information from the 
scan as it proceeds, probably in several cascading steps. The details 
of the process don't matter here; the point is that you end up with far 
less raw data to store, at very little cost beyond some processing of 
the scan data. I don't see why this wouldn't be a good idea. It would 
mean more people could be stored in the same amount of memory, and it 
would make creating uploads from the data easier and quicker.
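
To make that concrete, here's a rough sketch in Python of what a 
distilled per-axon-segment record might look like next to the raw voxel 
figures. Every specific number in it (how many axons, how many segments 
per axon, which fields a record needs) is a made-up placeholder, purely 
to illustrate the scale of the saving, not a claim about the real 
requirements:

    # Back-of-envelope comparison: raw 300 nm voxel storage for white
    # matter vs. a distilled per-axon-segment record. All the specific
    # numbers below are illustrative guesses, not measurements.
    import struct
    from dataclasses import dataclass

    @dataclass
    class AxonSegment:
        # Hypothetical distilled record for one axon branch/segment:
        # start and end coordinates plus a few per-segment scalars.
        start: tuple        # (x, y, z)
        end: tuple          # (x, y, z)
        diameter: float
        ranvier_nodes: int  # "for some reason I can't imagine"
        spare_a: float      # wiggle room
        spare_b: float

        def pack(self) -> bytes:
            # 9 x 4-byte floats + 1 x 4-byte unsigned int = 40 bytes
            return struct.pack("<9fI", *self.start, *self.end,
                               self.diameter, self.spare_a, self.spare_b,
                               self.ranvier_nodes)

    # Raw-scan side, from the figures quoted earlier: ~14e12 bytes per
    # cubic mm, guessing white matter as half of a 1.4e6 mm^3 brain.
    raw_bytes = 14e12 * 0.7e6

    # Distilled side: guess 1e8 long-range axons, 1e3 segments each.
    record = AxonSegment((0, 0, 0), (1, 1, 1), 1.0, 10, 0.0, 0.0)
    distilled_bytes = 1e8 * 1e3 * len(record.pack())

    print(f"raw voxels      : {raw_bytes:.1e} bytes")        # ~9.8e18
    print(f"axon records    : {distilled_bytes:.1e} bytes")  # ~4.0e12
    print(f"reduction factor: {raw_bytes / distilled_bytes:.0e}")  # ~2e6

Even if those guesses are off by a couple of orders of magnitude, you'd 
still be looking at terabytes rather than exabytes for the white 
matter, which is the point.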

Maybe the cortical scans would be a different matter, and storing every 
single voxel there would be worth doing, but I can't see why this 
strategy wouldn't work well for the white matter, which accounts for a 
large fraction of the brain's volume. Why record every grain of sand in 
a desert when what you're actually interested in is recreating the 
shapes and positions of the dunes?

The problem with being content with preservation and waiting is that we 
can't be sure how good our preservation protocols are in the first 
place, and preserved people can't make decisions about things they 
couldn't have predicted beforehand. I think it would be preferable to 
make the waiting period as short as possible, and to do everything we 
can to make that happen. Why wait until handling 20 exabytes is 
routine, when we can already handle terabytes with current, 
unremarkable technology, for the cost of working out some 
tissue-to-data algorithms now rather than later? That sounds like a 
good bet to me.

Less data is also easier to back up.

-- 
Ben
