[ExI] Smallest human-equivalent device
Anders Sandberg
anders at aleph.se
Sat Oct 12 17:36:14 UTC 2013
On 12/10/2013 17:59, John Clark wrote:
> On Fri, Oct 11, 2013 at 4:37 AM, Eugen Leitl <eugen at leitl.org
> <mailto:eugen at leitl.org>> wrote:
>
> > there's a widespread tendency to underestimate what
> evolutionary-driven biology has managed to accomplish in
> a few gigayears. A synapse is pretty damn small
> Synaptic active zone diameter: 300 ± 150 nm
> Synaptic vesicle diameter: 35 ± 0.3 up to 50 nm
>
>
> Yes but unlike the 22 nm 3D transistors that you have in your computer
> right now (or the 14 nanometer ones in the Broadwell chip when Intel
> ships it in 2014) a synapse cannot switch from on to off without the
> aid of a much much larger structure, an entire neuron, or rather 2
> entire neurons. Oh and then there is the fact that the typical neuron
> firing rate varies depending on the neuron, about 10 per second for
> the slowpokes and 200 times a second for the speed demons; but the
> typical transistor in your computer fires somewhere north of 3 BILLION
> times a second.
This kind of calculation easily becomes an apples-and-oranges comparison.
How many transistors are functionally equivalent to one synapse?
If we take the basic computational neuroscience model, an incoming spike
gets converted to a postsynaptic potential. This is typically modelled
as the membrane potential of the postsynaptic neuron getting a beta
function added to it (like w_ij H(t-t_0) (t-t_0) exp(-k(t-t_0)), where
w_ij is the weight, H the Heaviside function, t_0 the time of the spike,
and k some constant). Another common approach is to have a postsynaptic
potential that acts as a leaky integrator (P'=-kP + w_ij delta(t-t_0),
V(t)=P(t)+<other electrophysiological activity>). In a crude
integrate-and-fire model we do away with the electrophysiology and just
keep the P potential, causing the recipient neuron to fire (and reset P
to 0) if it goes above a threshold.
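To make this concrete, here is a toy Python sketch of the leaky-integrator
version. All the constants (the time step, k, the threshold, the weight and
the spike times) are made-up illustrative values, not anything measured:

# Toy leaky-integrator / integrate-and-fire model, as described above.
dt = 0.1          # ms, integration time step
k = 0.2           # 1/ms, leak rate of the postsynaptic potential P
threshold = 0.5   # firing threshold of the recipient neuron
w = 0.3           # synaptic weight w_ij
T = 100.0         # ms, total simulated time
incoming_spikes = [10.0, 12.0, 14.0, 60.0]   # spike times t_0 at this synapse

P = 0.0
fired_at = []
for step in range(int(T / dt)):
    t = step * dt
    P += dt * (-k * P)                        # leak: P' = -k P
    if any(abs(t - t0) < dt / 2 for t0 in incoming_spikes):
        P += w                                # delta input: add w_ij at each spike
    if P > threshold:                         # crude integrate-and-fire
        fired_at.append(t)
        P = 0.0                               # reset after firing
print("output spikes at (ms):", fired_at)

Run it and P climbs with each incoming spike, leaks back down between them,
and resets when the threshold is crossed.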
Clearly we need to at least be able to add a synaptic weight to some
other state variable, and this variable needs to have at least a few
bits of resolution. Doing this with transistors takes far more than one: a
full adder needs about 28 transistors, and a multiplier needs far more.
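To put rough numbers on that, a back-of-the-envelope count in Python,
assuming the 28-transistor full adder above, a ripple-carry layout for the
adder and a naive array layout for the multiplier (the layouts and the
6-transistor AND gate are my own simplifying assumptions):

# Rough transistor counts for adding/multiplying an n-bit weight.
FULL_ADDER = 28   # transistors per full-adder cell (figure quoted above)
AND_GATE = 6      # transistors per static CMOS AND gate (assumption)

def ripple_carry_adder(bits):
    # n-bit ripple-carry adder: one full adder per bit
    return bits * FULL_ADDER

def array_multiplier(bits):
    # naive n x n array multiplier: ~n^2 AND gates for the partial products
    # plus roughly n^2 full adders to sum them
    return bits * bits * AND_GATE + bits * bits * FULL_ADDER

for bits in (4, 8, 16):
    print("%2d-bit add:      ~%d transistors" % (bits, ripple_carry_adder(bits)))
    print("%2d-bit multiply: ~%d transistors" % (bits, array_multiplier(bits)))

So even a modest 8-bit weight update is in the hundreds of transistors, and a
multiply is in the thousands.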
Note that this has ignored synaptic adaptation (w_ij should decrease if
the synapse is used a lot over a short time, and then recover) and
plasticity (w_ij should potentiate or not depending on correlations
between neurons i and j). These require fairly involved calculations
depending on the model used; each state variable likely needs some adders
and multipliers too.
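For illustration, here is what that extra per-synapse state might look like,
using a simple depletion-and-recovery rule for adaptation and a pair-based
STDP rule for plasticity (the particular equations and constants are common
textbook choices, not anything canonical):

# Extra per-synapse state: short-term adaptation and pair-based STDP.
import math

def adapted_weight(w, R):
    # effective weight = nominal weight scaled by available resources R
    return w * R

def on_presynaptic_spike(R, use_fraction=0.3):
    # each use of the synapse depletes a fraction of its resources
    return R * (1.0 - use_fraction)

def recover(R, dt, tau_rec=200.0):
    # resources recover towards 1 with time constant tau_rec (ms)
    return R + dt * (1.0 - R) / tau_rec

def stdp_dw(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20.0):
    # potentiate if the pre spike precedes the post spike, depress otherwise,
    # with exponentially decaying dependence on the interval
    delta = t_post - t_pre
    if delta > 0:
        return A_plus * math.exp(-delta / tau)
    return -A_minus * math.exp(delta / tau)

# toy usage: a presynaptic spike, then a postsynaptic spike 5 ms later
w, R = 0.5, 1.0
R = on_presynaptic_spike(R)
w += stdp_dw(t_pre=10.0, t_post=15.0)
print("effective weight:", adapted_weight(w, R))

Each of these updates needs its own adds and multiplies (plus an exponential,
or a lookup table for it) on top of the basic weight addition.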
In fact, some approaches to neuromorphic hardware use analog electronics
precisely to get away from the messiness of adders and multipliers - the
above operations can be done relatively neatly that way. But the power,
precision and low price of digital electronics tend to win most of the time.
In the end, it is not obvious to me that a digital synapse can be made
smaller than a real synapse using silicon tech. I would be surprised if an
analog one couldn't be. Similarly, speeding things up might be eminently
doable, but while digital systems can vary their clock frequencies
continuously, an analog synapse would actually be stuck at a single speed.
--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University