[ExI] Can you avoid information theoretic death via 1080p? Re: pets, mirrors and cryonics

Brian Atkins brian at posthuman.com
Sun Nov 4 22:12:06 UTC 2012


Interesting blog post and extrobritannia thread; I looked it up, and for anyone 
interested the relevant posts are from April 2012: 
http://groups.yahoo.com/group/extrobritannia/messages/14700

I was struck by your "Number of brains" section; it seems a little too simple. 
Now, I'm not a neuroscientist, but at a high level aren't most human brains 
fairly similar in many respects? In other words, I doubt that all 10^11 of my 
neurons contribute much to making me unique, where "unique" is meant in terms of 
Merkle's information-theoretic death definition:

"A person is dead according to the information-theoretic criterion if their 
memories, personality, hopes, dreams, etc. have been destroyed in the 
information-theoretic sense..."

I'd guess you could upgrade significant chunks of a reconstruction of my mind 
with updated or superior parts, for example parts of the visual cortex, and as 
long as the reconstruction had mostly the same "memories, personality, hopes, 
dreams, etc." as I do, I would be pretty pleased with that outcome. In other 
words, I think there are probably big chunks of my brain that aren't among the 
key bits I would absolutely need in order to still be me. Again, I'm not a 
neuroscientist, so perhaps this is a naive viewpoint.

So for me, information-theoretic death is more of a fuzzy sliding scale, ranging 
from nothing at all being reconstructible up to some near-perfect nanotech 
upload process that maps every neuron and connection. Personally I could "live 
with" something well down that scale, and I would certainly prefer it if the 
alternative were nothing at all. Your particular model, though, seems to sit 
much closer to the perfect-upload end of the scale, minus a 10% disease/damage 
allowance. Is it possible to estimate the size of just the "key bits"? How many 
neurons out of the 10^11 really matter in making me me, how many are borderline 
relevant, and how many could probably be replaced with ones from someone else 
without my noticing?
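
To make that question concrete, here is the sort of back-of-envelope arithmetic 
I have in mind, as a rough Python sketch. Every number in it (synapses per 
neuron, bits per synapse, the three-way split of neurons) is an illustrative 
placeholder I picked for the example, not a neuroscience claim:

# Back-of-envelope: how much raw storage might sit behind the "key bits"?
# All constants below are assumptions for illustration only.

TOTAL_NEURONS = 1e11          # the 10^11 figure discussed above
SYNAPSES_PER_NEURON = 1e4     # assumed rough average
BITS_PER_SYNAPSE = 5          # assumed effective precision per synapse

# Assumed partition of neurons by how identity-relevant they are.
fractions = {
    "key (memories, personality)": 0.10,
    "borderline relevant":         0.30,
    "generic / swappable":         0.60,
}

raw_capacity_bits = TOTAL_NEURONS * SYNAPSES_PER_NEURON * BITS_PER_SYNAPSE

for label, frac in fractions.items():
    bits = raw_capacity_bits * frac
    print(f"{label:30s} ~{bits:.2e} bits (~{bits / 8 / 1e12:.0f} TB)")

Obviously the interesting work is in justifying a split like that at all; the 
arithmetic itself is trivial.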

Regarding the "Human information output" section, it also seems a bit too 
simple. Does this analysis take into account the extra bits of output we can get 
by inferring what is going on in the subject's mind from captured behavior? The 
authors of the spoken dialog entropy paper, for example, say their analysis does 
not take this into account. Could a future reconstruction system analyze a brief 
microexpression on your face and, based on everything else it knows, use it to 
get more bits of output? If so, how do we determine what the real output number 
is? It seems to depend, at least in part, on the skill of the future analysis 
program and on the total size of the available dataset, just as the dialog 
entropy paper's results depend on the skills and analysis technique of the 
researchers.
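
Here is a toy Python sketch of why I think the number depends on the analyzer. 
The hidden "mind states", the prior, and the two likelihood tables are all 
invented for illustration; the only point is that a sharper observer model 
extracts more bits from the exact same observation:

import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def posterior(prior, likelihood):
    # Bayes' rule over a small discrete set of candidate mind states.
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

# Four candidate mind states, equally likely before the observation.
prior = [0.25, 0.25, 0.25, 0.25]

# The same captured behavior, read by two different analyzers (made-up numbers).
naive_likelihood  = [0.30, 0.30, 0.20, 0.20]   # barely discriminates
expert_likelihood = [0.70, 0.20, 0.05, 0.05]   # strongly discriminates

for name, lik in [("naive", naive_likelihood), ("expert", expert_likelihood)]:
    gained = entropy(prior) - entropy(posterior(prior, lik))
    print(f"{name:6s} analyzer extracts ~{gained:.2f} bits from the same observation")

With these made-up numbers the naive analyzer gets a few hundredths of a bit and 
the expert gets most of a bit, which is the flavor of asymmetry I mean.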

For example, a naive analysis of a particular facial muscle movement in a video 
might count it as just one or two bits of information: "face muscle z moved for 
x milliseconds". A more advanced analysis, drawing on thorough knowledge of 
typical human neural network behavior, muscle behavior, and both past and future 
analyses (assuming you analyze further video later and then back-propagate your 
results), might let you get much more data out of that particular muscle 
movement. One muscle moving in a certain way, at a certain time, in a certain 
conversation might lead an advanced analysis program to throw out entire classes 
of possible neural structures. Curious what you think.
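
The "throw out entire classes" intuition has a simple information-theoretic 
reading: if one well-understood movement eliminates most of the candidate 
structure classes, the bits gained are log2(candidates before / candidates 
after). A minimal sketch, with arbitrary counts:

import math

candidates_before = 1_000_000   # assumed candidate structure classes
candidates_after = 1_000        # assumed survivors after one microexpression

naive_bits = 2                  # "face muscle z moved for x milliseconds"
advanced_bits = math.log2(candidates_before / candidates_after)

print(f"naive reading:    ~{naive_bits} bits")
print(f"advanced reading: ~{advanced_bits:.1f} bits")   # ~10 bits with these counts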

No matter what, I agree this data would have its uses in multiple other ways. I 
hope I can construct a system that captures the most relevant bits possible in 
the least annoying way. I'm still working on figuring out whether today's tech 
has reached a high enough quality-bits-per-annoyance-per-dollar ratio.
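
For what it's worth, here is how I have been thinking about that ratio as a 
crude figure of merit, in Python. The two candidate "systems" and every number 
attached to them are hypothetical placeholders, just to show the comparison:

def merit(bits_per_day, annoyance_score, dollars_per_day):
    # Higher is better; annoyance_score is an arbitrary 1-10 self-rating.
    return bits_per_day / (annoyance_score * dollars_per_day)

systems = {
    "always-on 1080p video rig": merit(bits_per_day=5e9, annoyance_score=8, dollars_per_day=3.0),
    "audio-only lifelogger":     merit(bits_per_day=5e7, annoyance_score=2, dollars_per_day=0.5),
}

for name, score in systems.items():
    print(f"{name:28s} {score:.2e} useful bits per annoyance-dollar")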


