[ExI] Dreaming AI
Stuart LaForge
avant at sollegro.com
Sat Jun 11 16:34:56 UTC 2022
Quoting BillK:
> On Tue, 7 Jun 2022 at 21:07, William Flynn Wallace via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>>
>> Can AI really develop new ways of thinking? Or is this about
>> something else?
>> To those of you who know far more about AI than me, what is going
>> on here? bill w
>
> AI already has new ways of thinking because it doesn't think like
> humans. In fact, at present AI *can't* think like humans. Some
> research is going into trying to make AI think more like humans, but
> others question whether this is more useful than having two
> alternative thinking methods.
While there is a sense in which every mind develops its own way of
thinking, AI is the third instance of the convergent evolution of
intelligence. Invertebrate intelligence evolved in the cephalopods,
such as squid and octopi, hundreds of millions of years ago;
vertebrate intelligence, such as that of the higher birds and mammals,
developed separately tens of millions of years ago; and silicon-based
intelligence is being developed as we speak. Each of these main
branches of intelligence thinks in ways very different from the
others, but they also share common design elements, such as networked
neurons, and that leads to certain mathematical commonalities.
> This article gives an interesting overview of the current state.
> <https://theconversation.com/were-told-ai-neural-networks-learn-the-way-humans-do-a-neuroscientist-explains-why-thats-not-the-case-183993?>
> We're told AI neural networks 'learn' the way humans do. A
> neuroscientist explains why that's not the case
> Published: June 6, 2022
> Author James Fodor
> PhD Candidate in Cognitive Neuroscience, The University of Melbourne
I agree with Fodor that AI relies on relatively clean, complete, and
conditioned data sets for its training, whereas human intelligence
evolved to operate on incomplete, noisy data. But I disagree with his
idea that the volume of information is substantially different.
Because the human brain navigates an analog world, the raw amount of
information that gets filtered, sifted, and processed by our sensory
systems is on par with the sheer amount of information processed by
AI. So AI does not require more data; it needs more refined data,
because it lives in an abstract digital world instead of the real
analog world.
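To put rough numbers on that, here is a back-of-the-envelope
comparison in Python. Every figure in it is an assumption I have
picked for illustration (an assumed ~10 Mbit/s for the visual stream,
a hypothetical 300-billion-token text corpus), not a measurement; the
point is only that the two totals land within a few orders of
magnitude of each other before any filtering is applied:

# Back-of-the-envelope comparison of raw human sensory throughput with the size
# of an AI training corpus. All figures are loose assumptions for illustration.

SECONDS_PER_WAKING_DAY = 16 * 3600      # assume ~16 waking hours per day
VISUAL_BITRATE_BITS_S = 1e7             # assumed ~10 Mbit/s for the visual stream alone
YEARS = 10                              # say, a child's first decade

human_bits = VISUAL_BITRATE_BITS_S * SECONDS_PER_WAKING_DAY * 365 * YEARS

CORPUS_TOKENS = 3e11                    # hypothetical large language model corpus
BITS_PER_TOKEN = 16                     # rough encoding assumption

ai_bits = CORPUS_TOKENS * BITS_PER_TOKEN

print(f"Assumed human visual input over {YEARS} years: ~{human_bits:.1e} bits")
print(f"Assumed AI training corpus:                    ~{ai_bits:.1e} bits")
print(f"Ratio (human/AI): ~{human_bits / ai_bits:.0f}x")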
Another thing Fodor is mistaken about is supervised learning. At first
glance it seems that biological brains do not engage in supervised
learning, though to be fair, not all AI does either. But my own
research has identified a process in natural intelligence that plays a
role analogous to supervised learning in AI. That process is sleep,
and specifically REM, or dreaming, sleep. Sleep and dreaming are like
supervised training in that the inputs and outputs of the intelligence
are compared against a labelled training set. Whereas in AI the labels
are literal text labels supplied by programmers, in natural
intelligence the labels are emotional or instinctual imprints
associated with an image or some other sensory perception. So, for
example, the sight of a tiger charging at a natural intelligence would
be labelled with the emotion of fear.
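To make the analogy concrete, here is a toy sketch in Python of what
such a training set looks like. The feature vectors and label names
are invented for illustration and are not meant to model any real
sensory code; the point is only that the data structure is the same
whether the labels come from a programmer or from evolved drives:

# A toy illustration of the analogy: in supervised learning the training set
# pairs inputs with labels; here the "labels" stand in for instinctual imprints
# rather than programmer-supplied text tags. All values are invented.

# Each entry: (sensory feature vector, instinctual label)
labeled_experiences = [
    ([0.9, 0.8, 0.1], "fear"),      # e.g. a large, fast-approaching shape (charging tiger)
    ([0.1, 0.2, 0.9], "desire"),    # e.g. ripe fruit
    ([0.2, 0.1, 0.1], "neutral"),   # e.g. rustling leaves
]

# A conventional supervised dataset has exactly the same shape; only the source
# of the labels differs (programmers vs. evolved drives).
for features, label in labeled_experiences:
    print(f"input={features} -> label={label!r}")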
In AI, the training set is supplied by the programmers. In natural
intelligence, the training set is supplied by evolved natural drives
such as hunger, thirst, sex, fear of death, and the desire for wealth
and social status. In humans, these drives are manifested by the
subconscious or unconscious mind associated with the more primitive
regions of the brain. In both instances, the outcomes of the
intelligent agent's decisions are compared to the training set to
generate an "error function," and that error function is minimized by
changing the synaptic weights: through back propagation in AI, or
through remodeling of synapses in the case of biological brains.
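Here is a minimal sketch of that loop in Python, using a tiny
feed-forward network on made-up data. The network size, data, and
learning rate are all illustrative assumptions; what matters is the
structure: compare actual outputs to the labelled targets, compute an
error function, and adjust the weights by back propagation:

import numpy as np

# Minimal illustration of the loop described above: compare the agent's actual
# outputs to the labelled targets, compute an error (loss) function, and adjust
# the weights by back propagation. The network and data are toy assumptions.

rng = np.random.default_rng(0)

# Toy "perceptions" with 3 features each; label 1.0 = fear-inducing, 0.0 = neutral.
X = np.array([[0.9, 0.8, 0.1],
              [0.8, 0.9, 0.2],
              [0.1, 0.2, 0.9],
              [0.2, 0.1, 0.1]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])

# One hidden layer of 4 sigmoid units feeding a single sigmoid output.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(2000):
    # Forward pass: the network's actual output for each input.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error function: mean squared difference between actual and desired outputs.
    loss = np.mean((out - y) ** 2)

    # Backward pass: propagate the error to get gradients for each weight matrix.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_hidden

    # Adjust the "synaptic weights" to reduce the error.
    W2 -= lr * d_W2
    W1 -= lr * d_W1

print(f"final loss: {loss:.4f}")              # small after training on this toy data
print("predictions:", out.round(2).ravel())   # should approach the labels 1, 1, 0, 0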
It is well established that most synaptic remodeling in animals occurs
during sleep, when items in short-term memory get transferred to
long-term memory, and so on. REM sleep is like back propagation in
that actual outputs are compared to desired outputs, and synaptic
weights are adjusted to minimize the difference between the actual and
desired outputs for a given input.
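Taken literally, the analogy suggests splitting learning into a "wake"
phase that only collects experiences and a "sleep" phase that replays
them offline against the instinct-supplied labels and adjusts the
weights. The sketch below is just that analogy in Python; the single
linear unit, the buffer, and the replay counts are assumptions for
illustration, not a claim about how REM sleep is actually implemented:

import numpy as np

# Wake/sleep analogy: experiences are merely buffered while "awake", and weight
# adjustment happens offline in a "sleep" phase that replays the buffer against
# the instinct-supplied labels. Everything here is an illustrative assumption.

rng = np.random.default_rng(1)
w = rng.normal(size=3)               # a single linear "reflex" unit

def respond(x):
    """Waking behaviour: act on the current weights, no learning."""
    return float(x @ w)

def sleep_consolidation(buffer, w, lr=0.1, replays=200):
    """Offline phase: replay buffered experiences and minimize squared error."""
    for _ in range(replays):
        for x, target in buffer:
            pred = x @ w
            w = w - lr * (pred - target) * x   # gradient step on (pred - target)**2 / 2
    return w

# Daytime: encounter stimuli and record (perception, instinctual label) pairs.
buffer = [
    (np.array([0.9, 0.8, 0.1]), 1.0),   # charging tiger -> fear (1.0)
    (np.array([0.1, 0.2, 0.9]), 0.0),   # ripe fruit     -> no fear (0.0)
]
print("before sleep:", [round(respond(x), 2) for x, _ in buffer])

# Nighttime: consolidate by replaying the day's experiences.
w = sleep_consolidation(buffer, w)
print("after sleep: ", [round(respond(x), 2) for x, _ in buffer])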
Therefore the theory of mathematical learning functions predicts that
a period of offline downtime, during which synapses are adjusted to
minimize the error function and optimize the learning function, is a
hallmark of most neural-network-based intelligence, whether natural or
artificial.
In a sense, androids do dream of electric sheep.
Stuart LaForge