<div class="moz-cite-prefix">On 2013-10-13 17:04, John Clark wrote:<br>
</div>

> On Sat, Oct 12, 2013 at 1:36 PM, Anders Sandberg <anders@aleph.se> wrote:
>
>>> How many transistors are functionally equivalent to one synapse?
>
> I don’t know but I can figure out how many modern transistors you could
> fit inside 2 neurons.

In computational neuroscience the typical fine-grained neuron model is
divided into electrically isopotential compartments, typically
corresponding to the segments of the dendritic and axonal branches. A
reasonable estimate is that there are about as many compartments as
synapses, so the total number of state-holding units is twice the synapse
count (the synapses themselves also count). A typical neuron has around
8,000 synapses, so 16,000 such units is a likely figure. Each compartment
carries at least a membrane potential and some channel states (in the
Hodgkin–Huxley model you have 3–6, depending on how you slice the
activation variables). Izhikevich estimated the cost at around 1,000 FLOPS
per compartment. That is likely an underestimate once you add extra
channels and synaptic properties, but those just multiply the guesstimate
a bit: I would be surprised if a synapse takes more than 10,000 FLOPS even
if you try to model a lot of state. Assuming 2,000 FLOPS per unit gives an
overall cost of 32 MFLOPS per neuron.
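
For anyone who wants to turn the knobs, here is a minimal
back-of-the-envelope sketch in Python. Every number in it is a guesstimate
from the paragraph above, not a measurement:

    # Rough cost of simulating one neuron in real time.
    # All values are guesstimates; adjust to taste.
    synapses = 8000           # typical synapse count per neuron
    units = 2 * synapses      # compartments ~ synapses, plus the synapses
    flops_per_unit = 2000     # pessimistic; Izhikevich's estimate is ~1000

    flops_per_neuron = units * flops_per_unit
    print("%.0f MFLOPS per neuron" % (flops_per_neuron / 1e6))  # -> 32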

> Using technology that Intel will mass produce next year they can build a
> transistor inside a 3*10^3 cubic nanometer box, or about 10^10
> transistors in the volume occupying 2 neurons.

That ought to be enough. Even if we assume 1,000 transistors per
operation, we should have more than enough, not to mention a big speed
advantage.
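
To put numbers on "more than enough" (the clock rate below is my own
assumption, deliberately conservative; John's figures say nothing about
speed):

    # Compute budget of 10^10 transistors vs. two 32-MFLOPS neurons.
    transistors = 1e10    # John's figure for a two-neuron volume
    per_unit = 1000       # pessimistic transistors per arithmetic unit
    clock_hz = 1e6        # assumed: 1 MHz, far below 2013-era clock rates

    ops_per_s = (transistors / per_unit) * clock_hz
    needed = 2 * 32e6     # two neurons at 32 MFLOPS each
    print("headroom: %.0e x" % (ops_per_s / needed))  # ~2e+05, even at 1 MHz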

The deep mess might be the change in configuration that happens during
plasticity. Synapses grow and find targets on an hour/day timescale, which
means the network topology is slightly mutable: just assuming a fixed
circuit network will not do. I think this is not too hard to handle with
interconnects, but those are pretty big circuits.
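
A toy illustration of the point in Python (the representation is mine,
purely for illustration): keep the synapses as data rather than as fixed
wiring, so a structural-plasticity step can rewire the network.

    # Toy mutable-topology network: synapses as data, not fixed wiring.
    from collections import defaultdict

    synapses = defaultdict(dict)   # synapses[pre][post] = weight

    def connect(pre, post, w):
        """Grow a synapse (hour/day-timescale structural plasticity)."""
        synapses[pre][post] = w

    def prune(pre, post):
        """Retract a synapse, changing the network topology."""
        synapses[pre].pop(post, None)

    connect(0, 1, 0.5)     # neuron 0 finds a target on neuron 1
    connect(0, 2, 0.1)
    prune(0, 2)            # later retracted; hardwired circuits cannot do this
    print(dict(synapses))  # {0: {1: 0.5}}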

> Granted you couldn’t (yet) pack transistors at that density throughout a
> volume as large as the human brain due to heat considerations, but
> imagine what will be practical in just a few years.

This is why I have high hopes for quantum dot cellular automata and other
near-reversible tech.

--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University