<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 12/10/2013 17:59, John Clark wrote:<br>
</div>
<blockquote
cite="mid:CAJPayv1O1Q4pcNeC34pXJfcorVfY0XqciRU3iOQJB92wBQ=7DQ@mail.gmail.com"
type="cite">
<div dir="ltr">On Fri, Oct 11, 2013 at 4:37 AM, Eugen Leitl <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:eugen@leitl.org" target="_blank">eugen@leitl.org</a>></span>
wrote:<br>
<div class="gmail_extra">
<div class="gmail_quote"><br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"> > there's a
widespread tendency to underestimate what
evolutionary-driven biology has managed to accomplish in<br>
a few gigayears. A synapse is pretty damn small<br>
Synaptic active zone diameter: 300 ± 150 nm<br>
Synaptic vesicle diameter: 35 ± 0.3 up to 50 nm<br>
</blockquote>
<div><br>
</div>
<div>Yes but unlike the 22 nm 3D transistors that you have
in your computer right now (or the 14 nanometer ones in
the Broadwell chip when Intel ships it in 2014) a synapse
cannot switch from on to off without the aid of a much
much larger structure, an entire neuron, or rather 2
entire neurons. Oh and then there is the fact that the
typical neuron firing rate varies depending on the neuron,
about 10 per second for the slowpokes and 200 times a
second for the speed daemons; but the typical transistor
in your computer fires somewhere north of 3 BILLION times
a second. <br>
</div>
</div>
</div>
</div>
</blockquote>
<br>
This kind of calculation easily becomes an apples-and-oranges
comparison. How many transistors are functionally equivalent to one
synapse?<br>
<br>
If we take the basic computational neuroscience model, an incoming
spike gets converted to a postsynaptic potential. This is typically
modelled as an alpha function added to the membrane potential of the
postsynaptic neuron (something like w_ij H(t-t_0)
(t-t_0)exp(-k(t-t_0)), where w_ij is the synaptic weight, H the
Heaviside step function, t_0 the time of the spike, and k a rate
constant). Another common approach is to treat the postsynaptic
potential as a leaky integrator (P' = -kP + w_ij delta(t-t_0),
V(t) = P(t) + &lt;other electrophysiological activity&gt;). In a crude
integrate-and-fire model we do away with the electrophysiology and
just keep the potential P, causing the recipient neuron to fire (and
reset P to 0) if it goes above a threshold. <br>
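The crude integrate-and-fire model above can be put into a few lines of
code. The sketch below is purely illustrative (parameter values, names,
and time step are my own choices, not taken from any particular
simulator): the leaky potential P decays, jumps by the weight w on each
incoming spike, and triggers a fire-and-reset at a threshold.

```python
# Minimal leaky integrate-and-fire sketch (illustrative parameters).
DT = 0.001        # simulation time step, s
K = 50.0          # leak rate constant, 1/s
THRESHOLD = 1.0   # firing threshold (arbitrary units)

def simulate(spike_times, w, t_end=0.1):
    """Euler-integrate P' = -K*P + w*delta(t - t0) for each incoming
    spike; fire (and reset P to 0) when P crosses the threshold.
    Returns the times at which the recipient neuron fired."""
    p = 0.0
    fired = []
    pending = sorted(spike_times)
    for step in range(int(t_end / DT)):
        t = step * DT
        p += -K * p * DT                   # leak
        while pending and pending[0] <= t: # delta input: jump by weight
            p += w
            pending.pop(0)
        if p >= THRESHOLD:                 # threshold crossing
            fired.append(t)
            p = 0.0                        # reset
    return fired

# Three rapid input spikes with weight 0.5 sum up past threshold once;
# a single spike of the same weight leaks away without firing.
print(simulate([0.010, 0.012, 0.014], w=0.5))
print(simulate([0.010], w=0.5))
```

Note that even this toy version needs an add (weight onto potential), a
multiply (the leak term), and a compare per time step, which is the point
of the transistor-count argument below.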
<br>
Clearly we need at least the ability to add a synaptic weight to some
other state variable, and this variable needs at least a few bits of
resolution. Doing this with transistors requires far more than one
(about 28 transistors for a full adder, and far more for a
multiplier).<br>
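To make the transistor count concrete, here is a sketch of a full adder
and a ripple-carry chain built from it. The per-gate transistor counts
assume standard static CMOS and are approximate; a naive gate-level
composition lands around 34 transistors, while an optimized "mirror"
full-adder cell gets down to the 28 quoted above.

```python
# Approximate static CMOS transistor counts per gate (assumed figures).
TRANSISTORS = {"xor": 8, "and": 6, "or": 6}

def full_adder(a, b, cin):
    """One-bit full adder: sum = a XOR b XOR cin, carry-out = majority."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, bits):
    """Add two integers with a chain of `bits` full adders
    (the final carry-out is dropped, so the sum wraps mod 2**bits)."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# Gate-level cost of one full adder: 2 XOR + 2 AND + 1 OR.
per_adder = 2 * TRANSISTORS["xor"] + 2 * TRANSISTORS["and"] + TRANSISTORS["or"]
print(per_adder)                    # ~34 T per adder at gate level
print(8 * per_adder)                # an 8-bit ripple-carry adder
print(ripple_add(100, 27, bits=8))  # 127
```

So even a modest 8-bit weight addition already costs a couple of hundred
transistors, before any multiplier enters the picture.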
<br>
Note that this ignores synaptic adaptation (w_ij should decrease if
the synapse is used heavily over a short time, and then recover) and
plasticity (w_ij should potentiate or depress depending on
correlations between the activity of neurons i and j). These require
fairly involved calculations depending on the model used; each state
variable likely needs some adders and multipliers too. <br>
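As a minimal sketch of those two effects (with made-up parameters, not
fitted to data): short-term depression modelled as a depleting resource
that recovers exponentially, in the spirit of Tsodyks-Markram, and a
pair-based STDP rule that adjusts the weight from the pre/post
spike-time difference.

```python
import math

def depress(spike_times, use=0.4, tau_rec=0.5, w=1.0):
    """Short-term depression: a resource fraction r is partly consumed
    on every spike and recovers toward 1 with time constant tau_rec.
    Returns the effective weight delivered by each spike."""
    r, last_t, delivered = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            r = 1.0 - (1.0 - r) * math.exp(-(t - last_t) / tau_rec)
        delivered.append(w * use * r)  # this spike's effective weight
        r -= use * r                   # depletion
        last_t = t
    return delivered

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based STDP: weight change for dt = t_post - t_pre.
    Pre before post potentiates; post before pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

burst = depress([0.00, 0.01, 0.02, 0.03])
print(burst)  # successive spikes in a fast burst deliver less and less
print(stdp_dw(0.005), stdp_dw(-0.005))
```

Each extra state variable here (the resource r, the weight update)
carries its own multiply-accumulate, which is what inflates the
per-synapse transistor budget.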
<br>
In fact, some approaches to neuromorphic hardware use analog
electronics to get away from the messiness of adders and multipliers
- the above operations can be done relatively neatly that way. But
the power, precision, and low price of digital electronics tend to
win most of the time. <br>
<br>
In the end, it is not obvious to me that a digital synapse can be
made smaller than a real synapse using silicon technology. I would be
surprised if an analog one couldn't be. Similarly, speeding things up
might be eminently doable, but while digital systems can vary their
clock frequencies continuously, an analog synapse would be stuck at a
single speed. <br>
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University </pre>
</body>
</html>