[ExI] thawing of ai winter?
f20170964 at pilani.bits-pilani.ac.in
Sat Apr 18 11:16:35 UTC 2020
The convergence of GAN training is still not proven in general, correct?
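To make the open question concrete: GAN training is simultaneous gradient
play on the non-convex min-max objective
min_G max_D E[log D(x)] + E[log(1 - D(G(z)))], and it is exactly these
alternating updates whose convergence is unproven in general. Here is a
minimal sketch of that game, assuming PyTorch; the toy 1-D data, network
sizes, and learning rates are illustrative, not canonical:

    import torch
    import torch.nn as nn

    latent_dim = 8
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, 1) * 0.5 + 2.0  # "true" data: N(2, 0.25)
        fake = G(torch.randn(64, latent_dim))

        # Discriminator step: push D(real) toward 1, D(fake) toward 0.
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step (non-saturating form): push D(G(z)) toward 1.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Nothing in this loop guarantees the pair settles at an equilibrium; in
practice the updates can cycle or mode-collapse, which is precisely the
unresolved part.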
>As the brain matures in contact with the real world, each of the pre-wired
layers is then fleshed out, and thanks to the pre-wired scaffold the amount
of data needed to optimize each layer is low.
We don't yet know how distributed information is coded in brains, but we do
know that the cortex is not so specialized that a given region can represent
only one modality. Consider, for example, the sensory-substitution
experiments that let people "see" through the tongue. So is it really
pre-wired?
>Observing a child who learns from just a few examples gives us the
mistaken impression that the brain can utilize data to learn much more
efficiently than deep learning networks but in fact we see the culmination
of about 600 million years of evolution, learning on quadrillions
(gazillions?) of data points to first create a pre-wired scaffold and then
fill out the blanks with data from a few years of sensory input.
How could the brain obtain relevant information from ONE example in a
CHANGING environment, and transform that signal into a representation that
can be stored and retrieved later? I think that we form invariants in our
brains. For example, I can tilt my head and look at my laptop from a
different angle, obtaining a totally different image, yet I instantly
recognize it as my laptop.
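In deep-learning terms, the closest stand-in for such invariants is
representation learning under augmentation: train an encoder so that two
artificial "viewpoints" of the same input land on nearly the same embedding.
A minimal sketch, assuming PyTorch and torchvision; the random image and
linear encoder are toy stand-ins:

    import torch
    import torchvision.transforms as T

    # Two random "viewpoints" of one image (rotation ~ head tilt,
    # crop ~ change of angle or distance).
    augment = T.Compose([
        T.RandomRotation(30),
        T.RandomResizedCrop(64, scale=(0.6, 1.0)),
    ])

    img = torch.rand(3, 64, 64)  # stand-in for a photo of the laptop
    view1, view2 = augment(img), augment(img)

    encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                  torch.nn.Linear(3 * 64 * 64, 128))
    z1 = encoder(view1.unsqueeze(0))
    z2 = encoder(view2.unsqueeze(0))
    sim = torch.nn.functional.cosine_similarity(z1, z2)
    # Untrained, sim is arbitrary; invariance training (e.g. a contrastive
    # loss pulling z1 and z2 together) drives it toward 1 across viewpoints.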
>The other way is to look at the pre-wired brain structure and try to
transfer insights from there to pre-wire a deep learning network.
What do you mean by pre-wired? Our brains are constantly changing; in fact,
every byte of information you process causes a physical change in the brain.
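That said, the nearest thing current practice has to "pre-wired, then
fleshed out by data" is transfer learning: freeze a pretrained backbone (the
scaffold) and fit only a small new head on a handful of examples. A minimal
sketch, assuming torchvision >= 0.13 and its ImageNet-pretrained ResNet-18;
the 10-class task and random batch are placeholders:

    import torch
    import torchvision

    # The frozen pretrained backbone plays the role of the scaffold.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False
    # Only the small new head gets "filled out" by the new data.
    model.fc = torch.nn.Linear(model.fc.in_features, 10)

    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    x = torch.rand(8, 3, 224, 224)   # placeholder batch
    y = torch.randint(0, 10, (8,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

Whether the brain's scaffold is anything like a frozen backbone is, of
course, exactly what is in question here.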
On Sat, Apr 18, 2020 at 6:23 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Fri, Apr 17, 2020 at 4:18 PM Kunvar Thaman via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> And deep learning is *incredibly* data hungry. I've worked in a start-up
>> trying to build self-driving cars (I worked on navigation) and I had to
>> quit because of how frustrating it got - it really showed me how much data
>> deep learning demands to be good and how brittle it is. We're very far
>> from an AI.
> ### This is an interesting question; in fact, it's a trillion-dollar
> question. Why does deep learning work so incredibly well in little walled
> gardens (chess, go, DotA) and yet struggle in real-time, real-world
> settings?
> Here is my guess - deep learning learns from scratch, optimizes for every
> new problem from the ground up. If the problem is circumscribed in terms of
> its complexity and yet data-rich, it can meander its way to a
> reasonably efficient solution quickly, especially if it can generate its
> own data, as in GANs. But when the problem has many layers of complexity,
> with different rules at each level, deep learning algorithms struggle to
> build the many-layered structures needed to address each level of
> complexity.
> Biological brains are pre-wired genetically to create the layers of
> processors needed to cope with tasks they evolved to solve. As the brain
> matures in contact with the real world, each of the pre-wired layers is
> then fleshed out, and thanks to the pre-wired scaffold the amount of data
> needed to optimize each layer is low. Observing a child who learns from
> just a few examples gives us the mistaken impression that the brain can
> utilize data to learn much more efficiently than deep learning networks but
> in fact we see the culmination of about 600 million years of evolution,
> learning on quadrillions (gazillions?) of data points to first create a
> pre-wired scaffold and then fill out the blanks with data from a few years
> of sensory input.
> This would mean that brains are not fundamentally better than deep
> learning in their ability to utilize data in general, however, they have
> the advantage of using solutions pre-computed by evolution in real-world
> applications. So to replicate the success of biological brains we don't
> need to invent anything radically different from deep learning but we do
> need to find the proper high-level layered scaffold specifically effective
> for real-world data and then graft deep learning onto that scaffold.
> There may be two ways of doing it. The Tesla way is to use gazillions of
> data points from millions of drivers to force the deep learning network to
> generate all the solutions for multi-layered analysis of the world, in a
> way recapitulating the evolution from the lowly worm all the way to
> near-human understanding. Since the original evolutionary process took 600
> million years, this method might take longer than expected. The other way
> is to look at the pre-wired brain structure and try to transfer insights
> from there to pre-wire a deep learning network.
> Does it make sense?