[ExI] thawing of ai winter?

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Sat Apr 18 00:51:04 UTC 2020


On Fri, Apr 17, 2020 at 4:18 PM Kunvar Thaman via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> And deep learning is *incredibly* data hungry. I've worked in a start-up
> trying to build self-driving cars (I worked on navigation) and I had to
> quit because of how frustrating it got - it really showed me how much data
> deep learning demands to be good and how brittle it is. We're very far
> from an AI.
>

### This is an interesting question; in fact, it's a trillion-dollar
question. Why does deep learning work so incredibly well in little walled
gardens (chess, Go, DotA) and yet struggle in real-time, real-world
applications?

Here is my guess: deep learning learns from scratch, optimizing for every
new problem from the ground up. If the problem is circumscribed in
complexity and yet data-rich, it can meander its way to a reasonably
efficient solution quickly, especially if it can generate its own data, as
in GANs. But when the problem has many layers of complexity, with different
rules at each level, deep learning algorithms struggle to build the
many-layered structures needed to address each level of complexity.
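
To make "generating its own data" concrete, here is a toy GAN training
loop in PyTorch (a hedged sketch; the 1-D "real" distribution and all
hyperparameters are arbitrary stand-ins, not any production system):

    import torch
    import torch.nn as nn

    # Generator maps noise to samples; discriminator scores realness.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1),
                      nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(64, 1) * 0.5 + 2.0  # stand-in "real" data
        fake = G(torch.randn(64, 8))           # self-generated data

        # Discriminator learns to tell real from fake.
        d_loss = (bce(D(real), torch.ones(64, 1)) +
                  bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator learns to fool the discriminator, manufacturing
        # an endless supply of training signal for itself.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()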

Biological brains are pre-wired genetically to create the layers of
processors needed to cope with the tasks they evolved to solve. As the
brain matures in contact with the real world, each of the pre-wired layers
is fleshed out, and thanks to the pre-wired scaffold the amount of data
needed to optimize each layer is low. Watching a child learn from just a
few examples gives us the mistaken impression that the brain can use data
far more efficiently than deep learning networks, but in fact we are seeing
the culmination of about 600 million years of evolution: learning on
quadrillions (gazillions?) of data points to first create a pre-wired
scaffold, and then filling in the blanks with data from a few years of
sensory input.

This would mean that brains are not fundamentally better than deep learning
at utilizing data in general; they simply have the advantage of reusing
solutions pre-computed by evolution in real-world applications. So to
replicate the success of biological brains we don't need to invent anything
radically different from deep learning, but we do need to find the proper
high-level layered scaffold, one specifically effective for real-world
data, and then graft deep learning onto that scaffold.
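
In today's terms, "grafting deep learning onto a scaffold" looks a lot
like transfer learning. A minimal PyTorch sketch (assuming torchvision's
pre-trained ResNet-18 as a stand-in for the scaffold; the 10-class head
and learning rate are arbitrary):

    import torch
    import torch.nn as nn
    from torchvision import models

    # The pre-trained backbone plays the role of the evolutionary
    # scaffold: structure and features we get "for free".
    backbone = models.resnet18(weights="IMAGENET1K_V1")
    for param in backbone.parameters():
        param.requires_grad = False  # the scaffold stays fixed

    # Only this small task-specific head is "fleshed out" by contact
    # with the (data-poor) target problem.
    backbone.fc = nn.Linear(backbone.fc.in_features, 10)

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

With the scaffold fixed, only a tiny fraction of the parameters must be
learned, so far less data is needed.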

There may be two ways of doing it. The Tesla way is to use gazillions of
data points from millions of drivers to force the deep learning network to
generate all the solutions for a multi-layered analysis of the world, in a
way recapitulating the evolution from the lowly worm all the way to
near-human understanding. Since the original evolutionary process took 600
million years, this method might take longer than expected. The other way
is to look at the pre-wired structure of the brain and try to transfer
insights from there to pre-wire a deep learning network.
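
We already do a little of this second kind of pre-wiring. Convolutional
layers, loosely inspired by the visual cortex, hard-wire local,
weight-shared connectivity instead of learning it, which collapses the
number of parameters that must be fitted from data. A quick illustration
in PyTorch (a 32x32 single-channel input, chosen arbitrarily):

    import torch.nn as nn

    # No prior: every pixel wired to every output unit.
    dense = nn.Linear(32 * 32, 32 * 32)
    # Pre-wired prior: local, translation-invariant connectivity.
    conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    print(sum(p.numel() for p in dense.parameters()))  # 1,049,600
    print(sum(p.numel() for p in conv.parameters()))   # 10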

Does it make sense?

Rafal

