[ExI] new neuron learning theory

Stuart LaForge avant at sollegro.com
Sun May 1 04:35:33 UTC 2022


Quoting Darin Sunley:

>
> Perceptron-style "neurons" were a simplified caricature of how neurologists
> thought neurons /might/ work back in the 70s, even when they were first
> implemented.

Yes, ML neurons are very simple compared to real neurons. But neither  
the moon nor the earth is a point mass, yet mathematical models which  
assumed they were point masses allowed us to send people to the moon  
and back.

> Time and neurological research hasn't been kind to the comparison.
>
> At this point, the only similarity between the basic elements of
> network-based machine learning algorithms and mammalian brain cells is the
> name. ML "neurons" are basically pure mathematical abstractions, completely
> unmoored from anything biological cells actually do.

As I have stated elsewhere, one key similarity between ML neurons  
and biological neurons is that both are non-linear with respect to  
their inputs and outputs. Also, both are found in networks with their  
peers, where each assumes unique parameters.
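
To make that concrete, here is a toy sketch in Python (my own  
illustration, not from any particular library): an ML neuron takes a  
weighted sum of its inputs and passes it through a non-linear  
squashing function, and the weights and bias are the parameters  
unique to that neuron.

    import math

    def ml_neuron(inputs, weights, bias):
        # Weighted sum of inputs, then a non-linear "squashing" function.
        # The weights and bias are this particular neuron's parameters.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # logistic sigmoid, one common choice

    # Two neurons in the same layer see the same inputs but carry
    # different parameters, so they respond differently:
    x = [0.5, -1.2, 3.0]
    print(ml_neuron(x, [0.1, 0.4, -0.2], 0.05))
    print(ml_neuron(x, [-0.7, 0.2, 0.9], -0.3))

Crude as it is, that captures the two properties I mentioned:  
non-linear response and per-neuron parameters.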

Stuart LaForge
