[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be
johnkclark at gmail.com
Mon Feb 1 00:25:33 UTC 2016
On Sun, Jan 31, 2016 at 2:04 PM, Anders Sandberg <anders at aleph.se> wrote:
> I think Eliezer has a relevant point: he is concerned that "Human neural
> intelligence is not that complicated and current algorithms are touching on
> keystone, foundational aspects of it." - i.e. we may have found a general
> tool in deep learning that reduces the "to do" list of AGI by at least one
> line (out of an unknown number).
I also think Eliezer makes a good point (and wish he were still on the
list), but the number of lines in the brain's master algorithm is not
*completely* unknown; we can put an upper limit on its size. Ray
Kurzweil says:
*"The amount of information in the genome (after lossless compression,
which is feasible because of the massive redundancy in the genome) is about
50 million bytes (down from 800 million bytes in the uncompressed genome)."*
We also know that only 40% of the genome deals with the brain, so that gets
us down to 20 million bytes; that would be a big program but not bigger
than some that people have already written. And most of that 20 million
bytes must be about basic biological functions and how neurons and glial
cells can stay alive, rather than the all-important seed algorithm that
allows us to learn. So the master algorithm must be smaller than 20 million
bytes, and probably a lot smaller.
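The arithmetic behind that bound can be sketched in a few lines (the figures are the ones quoted above from Kurzweil and the 40% estimate in this post; they are rough estimates, not measurements):

```python
# Back-of-envelope upper bound on the size of the brain's "master algorithm",
# using the figures quoted in this post (Kurzweil's compression estimate and
# the 40% brain-related-genome figure; both are rough assumptions).
uncompressed_genome_bytes = 800_000_000   # ~800 million bytes, uncompressed genome
compressed_genome_bytes = 50_000_000      # ~50 million bytes after lossless compression
brain_fraction = 0.40                     # share of the genome dealing with the brain

brain_spec_bytes = compressed_genome_bytes * brain_fraction
print(f"Upper bound on brain-related genetic information: "
      f"{brain_spec_bytes / 1e6:.0f} million bytes")
# Much of this budget covers basic cell biology, so the seed learning
# algorithm itself must fit in well under this bound.
```

This is only an upper bound on information content, not an estimate of the algorithm's actual size; the post's argument is that the true figure is far below it.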
> More practically I think the Wired article gets things right: this is a
> big deal commercially. Solving tricky value functions is worth money
Yes, and big money means more competition, and more competition means more
progress.
> if they do generalize to hand-eye coordination, then we will have a
> practical robot revolution.
When that happens it seems to me many if not most economic textbooks
would be rendered obsolete.
John K Clark