[ExI] sciam blog article

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Tue Mar 29 07:49:31 UTC 2016


On Sat, Mar 26, 2016 at 11:32 PM, Robin D Hanson <rhanson at gmu.edu> wrote:

>
> On Mar 26, 2016, at 2:06 AM, Rafal Smigrodzki <rafal.smigrodzki at gmail.com>
> wrote:
>
> ### Some parts of the brain, such as the midbrain and structures inferior
> to it, are non-modular, spaghetti-like and hardwired in details -
> genetically determined and running on completely different principles from
> the cortex. The cortex and parts of the basal ganglia are however highly
> modular and most likely running a relatively uniform underlying algorithm
> that determines both short-term function and the longer-term processes,
> such as rewiring of the cortex.
>
>
> Yes, some parts may be simple, and even occupy a large fraction of the
> brain. Even so, other parts may not be, and even if they occupy a small
> fraction of the brain, it may take a long time to figure out how to create
> systems that substitute effectively for them. I discuss this more at:
> https://www.overcomingbias.com/2016/03/how-good-99-brains.html
>

### I wholeheartedly agree with the premises you outline in your blog post
above, but I disagree with the overall conclusion.

Indeed, here we encounter issues related to distinct levels of the
organization of matter and information. The lower parts of the brain encode
knowledge learned in the course of evolution and stored genetically; they
are not malleable in individuals (i.e. they allow only very limited
individual learning) and, as noted previously, are not very modular. The
cortex encodes a relatively small amount of evolutionary knowledge, which
allows the construction of an individual learning engine built on a highly
modular structure.

Generally speaking, deciphering genetically encoded knowledge is very
difficult. I spent a few years of my life on a failed attempt at finding
genes involved in wiring a part of the brain, which even if successful
would have been only a small first step towards figuring out how that part
works. The techniques we use for this search (e.g. optogenetically modified
mice) are tedious and extremely time-consuming. It takes a long time, from
six months to a couple of years, to tweak a mouse, read out the effects and
go back for the next round of learning. Plus you need large infrastructure
(a university neuroscience lab) to perform the experiments. Indeed, as you
write, it takes a long time to figure out how the brainstem works, because
this part of the brain deploys from a relatively large genetic database.

On the other hand, learning about information processing in silico is much
easier. The techniques for learning in silico boil down to tweaking code,
running it and seeing what sticks. It might take as little as 5 minutes
from making a change in code to seeing the initial results, and multiple
rounds of learning can be accomplished with meagre equipment: a workstation
and a coffee maker. We are talking about a difference of roughly four
orders of magnitude in the time and cost of learning between deciphering
gene-encoded knowledge and learning within human-invented knowledge.
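
(A back-of-the-envelope check of that figure, using the rough iteration
times above rather than any real measurements; the snippet is just my own
sketch, not anything from Robin's post:)

    # One round of optogenetic mouse work vs. one tweak-and-rerun cycle
    # in silico, both in minutes (very rough assumed numbers).
    wet_lab_minutes = 6 * 30 * 24 * 60   # ~6 months per round
    in_silico_minutes = 5                # ~5 minutes per round

    ratio = wet_lab_minutes / in_silico_minutes
    print(ratio)   # ~52,000x, i.e. a bit over 4 orders of magnitude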

But, luckily for the AI designer, the genetically complex brain parts are
not important for being smart. They are there to integrate information from
your gut and tell the gut to move, not to recognize images or do rocket
science. What we call intelligence resides in the cortex and its
interaction with some forebrain ganglia, the genetically simple parts. As
John aptly remarked, the jump from chimpanzee intelligence to human
intelligence is encoded in much less than 9 MB of code. I would guess that
the tweak from chimp to human might be as little as 9 kB of code.
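
(My guess at where the 9 MB figure comes from; the genome size and the
~1.2% chimp-human divergence are the commonly quoted ballpark numbers, not
something John specified, so treat this as a reconstruction:)

    # ~3 billion base pairs, ~1.2% single-nucleotide divergence between
    # chimp and human, 2 bits per base pair.
    genome_bases = 3e9
    divergence = 0.012
    bits_per_base = 2

    diff_megabytes = genome_bases * divergence * bits_per_base / 8 / 1e6
    print(diff_megabytes)   # ~9 MB of differing sequence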

For AI design we do not need to find out much about the brain. Hardly any
AI advances were directly driven by neuroscience. AI researchers search a
configuration space of information-processing structures in general, with
only a vague inspiration from biology. Thanks to their four-order-of-magnitude
advantage in learning speed over neuroscientists, they have independently
created enough knowledge about intelligence to beat humans at so many tasks
that the end of the road may soon come into view.

This is why I am relatively optimistic about prospects for AI and less
optimistic about progress in neuroscience, at least until we can upload
neuronal circuits into computers and start experimenting on them in silico,
rather than in mice. We will have general AI long before we manage to
upload a mouse, much less a human.

