[ExI] sciam blog article

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Fri Apr 1 04:11:32 UTC 2016


On Wed, Mar 30, 2016 at 10:51 AM, Robin D Hanson <rhanson at gmu.edu> wrote:

> You go on to argue that the cortex is our most uniquely human brain part
> and arguably the seat of our most general reasoning abilities. But even if
> these are true, they don’t at all speak to the overall abilities of a
> system which only had the equivalent of a cortex.
>

### Indeed, but these arguments support the notion that general reasoning
abilities should be achievable using a modular, relatively simple
algorithm, rather than a horrendously complex one.

-------------------

>
> If you put a deep neural network with 2000 layers on top of whatever
> powers ATLAS robots you could get a pretty close facsimile of a human mind
> in a clumsy human body.
>
>
> Here you seem to claim that everything but the cortex is relatively
> trivial - that  we already have all those abilities modeled, and all we
> need is to add a cortex to have a complete system. THAT is the claim for
> which I’d like to see evidence.
>

### Yes, a lot of the complicated stuff outside of the cortex has already
been realized in silico, but this doesn't make it trivial. It took decades
of work by thousands of researchers to get to a robot dog that recovers from
a kick. And we do already have AI programs capable of emulating the function
of parts of the cortex, learning highly complex behavior from experience.
In fact, I would guess that there is a large class of cortex-like algorithms
that are not highly complex, yet solve very difficult problems one domain at
a time.

I would, however, venture that to make a true functional substitute for a
human, rather than a facsimile just able to star in demo clips, AI
researchers still need to perfect a motivation system that would fit between
an AlphaGo-style optimization algorithm and the lower functions already
embodied in existing robot designs.
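
Just to make the layering concrete, here is a toy sketch in Python (all
class and method names are my own invention, not any existing robot or
AlphaGo API) of a planner, a motivation layer, and a motor controller, with
the motivation layer deciding which goal the planner should pursue:

```python
# Toy sketch of the layering I have in mind; everything here is hypothetical.
from dataclasses import dataclass


@dataclass
class Drive:
    name: str
    urgency: float  # learned, context-dependent signal in [0, 1]


class Planner:
    """Stand-in for an AlphaGo-style optimizer: given a goal, propose actions."""
    def plan(self, goal: str, state: dict) -> list[str]:
        return [f"step toward {goal}"]  # placeholder for a real search


class MotorController:
    """Stand-in for the low-level control already embodied in existing robots."""
    def execute(self, action: str) -> None:
        print(f"executing: {action}")


class MotivationSystem:
    """The missing middle layer: turns competing drives into a current goal."""
    def __init__(self, drives: list[Drive]):
        self.drives = drives

    def current_goal(self, state: dict) -> str:
        # Pick the most urgent drive; real animals do something far messier,
        # with learning happening at this level too.
        return max(self.drives, key=lambda d: d.urgency).name


def control_loop(state: dict) -> None:
    motivation = MotivationSystem([Drive("find calories", 0.7),
                                   Drive("avoid enemy", 0.4)])
    planner, motor = Planner(), MotorController()
    goal = motivation.current_goal(state)  # motivation sits between the two layers
    for action in planner.plan(goal, state):
        motor.execute(action)


control_loop({"location": "kitchen"})
```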

Motivation might feel like a beguilingly simple issue, but the brain
structures that subserve it are actually very complex. This is the part of
the brain that has to bridge the hardwired knowledge of the brainstem and
midbrain and the completely learned function of the cortex. The high-level
commands inherent in motivation are evolutionarily conserved, but their
precise implementation is a system of very high-level learned behaviors.
The limbic system and multiple forebrain nuclei, many capable of learning
and all wired into a complex, non-modular network, have to work well to
steer you clear of mania, depression, ADD, and unduly high or low time
preference, while weighing the competing demands of finding calories,
allies, and fertile mates and avoiding enemies, including the most
complicated ones, your conspecifics.
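
To make the time-preference point concrete, a minimal toy calculation
(numbers invented purely for illustration, not a model of any real brain
circuit) shows how a single discount parameter can tip the arbitration
between near-term and long-term demands:

```python
# Toy arbitration among competing demands; the discount factor stands in for
# time preference. All payoffs and delays are made up for illustration.

def choose_demand(options, discount):
    """Pick the option with the highest discounted payoff.

    options: list of (name, payoff, delay_in_steps)
    discount: per-step discount factor in (0, 1]; lower = steeper time preference
    """
    return max(options, key=lambda o: o[1] * discount ** o[2])[0]


demands = [("find calories", 5.0, 1),    # small payoff, available now
           ("court a mate", 100.0, 20),  # large payoff, far in the future
           ("avoid enemy", 30.0, 2)]     # moderate payoff, very soon

print(choose_demand(demands, discount=0.95))  # patient agent: "court a mate"
print(choose_demand(demands, discount=0.60))  # impulsive agent: "avoid enemy"
```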

I don't know how much more work needs to be done to achieve this
unification. Many mildly autistic individuals manage to emulate some
subcortical functions in their cortex, slowly learning the usually
instinctual social niceties. Maybe it won't be too difficult to use
existing deep learning paradigms to implement complex motivation, taking us
from robo-nerds to social butterflies; or maybe new discoveries will need to
be made. My guess is the former, but I am no expert in the area.

The next 10 years will be a very interesting time in AI research.