[ExI] Vermis ex machina
mike at 7f.com
Sun Mar 1 02:10:26 UTC 2015
If the adjacency matrix is at least somewhat sparse, it can be
compressed a great deal, and you can store it as a sparse matrix.
I assume it is indeed sparse, given the locality of a large number of
the connections in a human brain. For compression see:
http://www.netlib.org/lapack/lawns/lawn50.ps (if my memory serves me
correctly; I can't check it here, since it's a PostScript file and I
don't have a viewer on this machine, but I believe that's the right
reference). This kind of compression is common in linear algebra
packages.
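To make the sparse-storage idea concrete, here's a minimal sketch of
the CSR (compressed sparse row) layout that the linear algebra
packages use, on a toy 5-node graph. The function and variable names
are my own for illustration, not from any particular library:

```python
# Sketch: store a sparse adjacency matrix in CSR form instead of as a
# dense n x n bit matrix. Only the nonzero entries are kept.

def to_csr(n, edges):
    """Convert an edge list into CSR arrays (row_ptr, col_idx)."""
    adj = [[] for _ in range(n)]
    for src, dst in edges:
        adj[src].append(dst)
    row_ptr, col_idx = [0], []
    for row in adj:
        col_idx.extend(sorted(row))   # column indices of this row's 1-bits
        row_ptr.append(len(col_idx))  # running offset into col_idx
    return row_ptr, col_idx

def neighbors(row_ptr, col_idx, v):
    """All nodes v connects to, read straight out of the CSR arrays."""
    return col_idx[row_ptr[v]:row_ptr[v + 1]]

edges = [(0, 1), (0, 3), (1, 2), (3, 4), (4, 0)]
row_ptr, col_idx = to_csr(5, edges)
print(neighbors(row_ptr, col_idx, 0))  # [1, 3]
```

Storage is two integer arrays totaling n + 1 + (number of edges)
entries, versus n^2 bits for the dense matrix.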
However, I doubt anyone has done this compression for a real-time
brain-like system so far, i.e. compressed it and then used it in
compressed form for propagation-type activities, rather than
uncompressing it as needed.
To calculate the savings, you'd need a measure of how sparse it is,
and I'm not quite sure how to relate that to the compressed size at
the moment. Interesting problem! You might be better off not using an
adjacency-matrix representation at all - a node/edge representation
is likely the way to go, though load/save times would be bad unless
you can block it somehow. But one assumes you would just leave it in
memory most of the time :)
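A back-of-envelope comparison of the two representations, using
Stuart's 86 billion neurons. The ~7,000 synapses per neuron figure is
my assumption, not from the thread, and the 8 bytes per edge assumes
one 64-bit target index per synapse; plug in your own estimates:

```python
# Dense bit matrix vs node/edge (edge list) storage, back of envelope.

n = 86_000_000_000   # neurons
avg_degree = 7_000   # ASSUMED synapses per neuron
bytes_per_edge = 8   # one 64-bit neuron index per synapse

dense_bits = n * n                          # 7.396e21 bits, as in the thread
dense_bytes = dense_bits // 8
edge_bytes = n * avg_degree * bytes_per_edge

print(f"dense: {dense_bytes:.3e} bytes")
print(f"edges: {edge_bytes:.3e} bytes")
print(f"ratio: {dense_bytes / edge_bytes:.0f}x")
```

Under these assumptions the edge list comes in around five petabytes,
five orders of magnitude smaller than the dense matrix.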
The compression is somewhat analogous to run-length encoding: you walk
the matrix looking for runs of (in this case) 0s and 1s and encode
those. The disadvantage, of course, is that propagation-type queries
take longer - but not so much longer that you would want to forgo the
huge savings in memory, plus the potential cache-coherence gains.
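For illustration, a minimal run-length encoder over one row of the
bit matrix - a sketch of the general idea, not the specific scheme in
the LAPACK note:

```python
# Run-length encode a bit vector (one adjacency-matrix row) as
# (bit, length) pairs, and decode it back.

def rle_encode(bits):
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return runs

def rle_decode(runs):
    out = []
    for b, length in runs:
        out.extend([b] * length)
    return out

row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
runs = rle_encode(row)
print(runs)  # [[0, 3], [1, 2], [0, 4], [1, 1]]
assert rle_decode(runs) == row
```

Answering "is there an edge at column j" now means walking the runs
and summing lengths until you pass j - that's the query slowdown
mentioned above.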
On Sat, Feb 28, 2015 at 5:46 PM, Stuart LaForge <avant at sollegro.com> wrote:
> Quoting Stuart LaForge <avant at sollegro.com>:
>> A human brain contains 86 billion neurons, so its adjacency matrix would
>> be 7.396 x 10^21 bits in size or roughly an exabyte.
> Correction 7.396 x 10^21 bits is actually closer to a zettabyte. Sorry. :-)
> Stuart LaForge