[ExI] how large is a human mind?

Anders Sandberg anders at aleph.se
Tue Jan 1 13:02:34 UTC 2013


On 2013-01-01 03:24, Rafal Smigrodzki wrote:
> Does anybody know how large is the individual, indexical part of an
> average human?

No. I think an answer to this question would be a profound step forward 
in neuroscience. In particular, it depends on what level of resolution 
an emulation needs to be done at, which is equivalent to answering 
roughly what kind of functional processing is going on in the whole CNS.

I suspect the answer is that the size of the individual mind is large.

This is an interesting paper:
Ju Lu, Juan Carlos Tapia, Olivia L White, Jeff W Lichtman. The 
Interscutularis Muscle Connectome, PLoS Biology:
http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000032
It looks at the connectivity of peripheral nerves within the same 
animal (left and right sides) and between animals. They found 
differences in topology within an animal on par with the differences 
between animals. If this were to hold in the CNS, we should expect big 
differences in local connectivity from place to place even when the 
circuits perform the same function.

As one of the comments mentions, there is also plenty of variability in 
mRNA expression and ionic conductances in neurons of the same type 
(Schulz DJ, Goaillard JM, Marder E. (2006) “Variable channel expression 
in identified single and electrically coupled neurons in different 
animals.” Nat. Neurosci. 9(3):356-62.) It should be noted that there is 
also evidence that some aspects of connectivity are fairly regular (Hill 
et al., Statistical connectivity provides a sufficient foundation for 
specific functional connectivity in neocortical neural microcircuits, 
PNAS 2012: http://www.pnas.org/content/early/2012/09/17/1202128109.short 
) - however, their results only explain 75% of the synapse locations, so 
there is plenty of variability even in a possibly highly constrained 
system.

Whether different connectivities and properties mean different minds is 
the hard part. It is not hard to make neural networks with different 
connectivities and weights that do the same thing: we want to know the 
*functional* difference. For all we know, maybe the parameters co-vary 
so that we get exactly the same result. As far as I know, there is no 
good measure for this (I have been tinkering with the question on and 
off - anybody know a good way of comparing two phase spaces with each 
other?). But we know that changes on an *individual neuron level* can be 
enough to cause macroscopically different behavior (Houweling, A., & 
Brecht, M. (2008). Behavioural report of single neuron stimulation in 
somatosensory cortex. Nature, 451(7174), 65-8.) - there is a good chance 
that these tiny differences add up to a great deal of individuality.
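On the phase-space comparison question in the parenthesis above: one 
crude possibility (a toy sketch of mine, not an established measure) is 
to sample long trajectories from each system, histogram the visited 
states, and compare the occupancy distributions, e.g. with the 
Jensen-Shannon distance. This only captures where a system spends its 
time, not the flow structure, so it would call co-varying parameter 
sets "the same" whenever the occupancies match:

    # Toy sketch: compare two phase spaces via occupancy histograms.
    # The logistic map stands in for an arbitrary dynamical system;
    # all parameter values are illustrative only.
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def trajectory(a, x0=0.1, steps=100000):
        """Iterate the logistic map x -> a*x*(1-x)."""
        xs = np.empty(steps)
        x = x0
        for i in range(steps):
            x = a * x * (1.0 - x)
            xs[i] = x
        return xs

    def occupancy(xs, bins=200):
        """Normalised histogram of visited states."""
        counts, _ = np.histogram(xs, bins=bins, range=(0.0, 1.0))
        return counts / counts.sum()

    p = occupancy(trajectory(3.90))
    q = occupancy(trajectory(3.91))
    print("JS distance:", jensenshannon(p, q))  # 0 = identical occupancy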

Now if (say) 25% of cortical synapses are individually located, that 
means around 3.75e13 synapses (out of roughly 1.5e14 in cortex). A 
neuron address is around 84-107 bits (depending on neuron number and 
whether we need to address individual compartments) - let's say 100 
bits. That is about 426 terabytes. (There is information in the 
synapses themselves too - potentiated or not, various time constants - 
but as long as it is less than ~100 bits per synapse it does not change 
the order of magnitude.)
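In code, the back-of-envelope above looks like this (my own check; the 
~1.5e14 total cortical synapse count is the baseline implied by the 25% 
figure):

    # 25% of ~1.5e14 cortical synapses, 100-bit addresses each.
    synapses = 0.25 * 1.5e14          # 3.75e13 individually located synapses
    bits_per_synapse = 100            # rough address size
    total_bytes = synapses * bits_per_synapse / 8
    print(total_bytes / 2**40)        # ~426 (binary) terabytes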

The 426 terabytes can certainly be compressed a bit. Neurons in one 
region often project to another region with a high probability along 
bundles, so their addresses are correlated. If we have ~100 areas and 
each area projects mostly to 10 other areas (handwave, handwave:
http://www.aleph.se/andart/archives/2008/02/connecting_with_the_macaque.html
http://www.aleph.se/andart/archives/2004/04/taking_a_cat_map.html ) then 
you just need about 3.3 bits to say which area and then 30-39 bits for 
the individual target neuron (or compartment). That is a saving of 
about 60 bits per synapse, and we are already down to just 170 
terabytes. If that process were repeated inside areas (a lot of them 
have retinotopic, tonotopic or otherwise topographic structure, after 
all) we might shrink the total by the same factor once or twice more, 
to about 68 or 27 TB compressed.

If cortical minicolumns are what matters, then a synapse address needs 
only about 27 bits, and we need just 110 terabytes. Repeating the 
compression argument above, we get down to 44 or 17 terabytes.
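Putting the compression arithmetic in one place (my restatement of the 
figures above; the ~10-target fan-out and the repeat-the-factor step 
are the handwaves already flagged, so the printed values only roughly 
match the rounded numbers in the text):

    import math

    synapses = 3.75e13                         # from the estimate above

    def terabytes(bits_per_synapse):
        return synapses * bits_per_synapse / 8 / 2**40

    area_bits = math.log2(10)                  # ~3.3 bits: pick 1 of ~10 areas
    local_bits = 36                            # ~30-39 bits: target neuron
    print(terabytes(100))                      # ~426 TB, uncompressed
    print(terabytes(area_bits + local_bits))   # ~168 TB, area-level prefix

    ratio = (area_bits + local_bits) / 100     # ~0.4 saving per repetition
    print(168 * ratio, 168 * ratio**2)         # ~66 and ~26 TB (cf. 68/27)

    print(terabytes(27))                       # ~115 TB, minicolumn addresses
    print(terabytes(27) * ratio,
          terabytes(27) * ratio**2)            # ~45 and ~18 TB (cf. 44/17)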



> My vague feeling is that the lower part of the range is more plausible
> but I am giving this very large range of numbers to express my lack of
> confidence in the estimates. Still, it should be rather cheap to pay
> for individual storage which means the competition for survival in the
> M-brain substrate could be about paying for single gigabytes of
> storage space.

The cost of storing and running a mind does not depend on absolute size 
but on relative size and demand. If a mind requires m units of resources 
and M units are available, and there are N minds, we should expect the 
cost to be some increasing function of mN/M, likely convex (prices go 
way up when the M-brain is crowded to capacity). So we can model it as 
price = (mN/M)^a, where a>1. If the value of a mind is on average V, we 
should expect more minds to be made until (mN/M)^a=V, or N=(M/m)V^(1/a). 
So the big determinants might be V and a; M/m just sets the scale.
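As a toy calculation (my own sketch; the values for M, m, V and a are 
arbitrary illustrations, not estimates):

    def equilibrium_minds(M, m, V, a):
        """N at which price = (m*N/M)**a equals the average mind value V."""
        return (M / m) * V ** (1.0 / a)

    # Capacity for a million minds (M/m = 1e6), V = 0.25, a = 2:
    # price grows quadratically with crowding, equilibrium at N = 5e5.
    print(equilibrium_minds(M=1e18, m=1e12, V=0.25, a=2.0))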


-- 
Anders Sandberg
Future of Humanity Institute
Oxford University
