[ExI] Computational resources needed for AGI...
kellycoinguy at gmail.com
Sun Feb 6 07:23:08 UTC 2011
On Sat, Feb 5, 2011 at 9:50 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
>> On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore <rpwl at lightlink.com>
>>> Kelly Anderson wrote:
>> I doubt it, if only because we don't have anything near the raw
>> computational power necessary yet. Unless you have really
>> compelling evidence that you can get human-like results without
>> human-like processing power, this seems like a somewhat empty claim.
> Over the last five years or so, I have occasionally replied to this question
> with some back of the envelope calculations to back up the claim. At some
> point I will sit down and do the job more fully, and publish it, but in the
> mean time here is your homework assignment for the week.... ;-)
> There are approximately one million cortical columns in the brain. If each
> of these is designed to host one "concept" at a time, but with at most half
> of them hosting at any given moment, this gives (roughly) half a million
> active concepts.
I am not willing to concede that this is how it works. I tend to
gravitate towards a more holographic view, i.e. that the "concept" is
distributed across tens of thousands of cortical columns, and that the
combination of triggers to a group of cortical columns is what causes
the overall "concept" to emerge. This is a general idea, and may not
apply specifically to cortical columns, but I think you get the idea.
The reason I favor the holographic model is that partial brain damage
doesn't knock out all memory or processing ability, only degrades it.
The neat one-to-one mapping of concept to neuron was debunked to my
satisfaction some time ago.
> If each of these is engaging in simple adaptive interactions with the ten or
> twenty nearest neighbors, exchanging very small amounts of data (each
> cortical column sending out and receiving, say, between 1 and 10 KBytes,
> every 2 milliseconds), how much processing power and bandwidth would this
> require, and how big of a machine would you need to implement that, using
> today's technology?
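Here is a first pass at that homework, as a minimal sketch. All figures are
taken straight from your post (using the upper end where you give a range),
plus one assumption of mine: that the 1-10 KB is a column's total traffic per
exchange, not per neighbor.

```python
# Back-of-the-envelope check of the quoted parameters.
columns_total = 1_000_000    # cortical columns in the brain
active_fraction = 0.5        # at most half hosting a concept at a time
bytes_per_exchange = 10_000  # 1-10 KB per column per exchange (upper bound)
period_s = 0.002             # one exchange every 2 ms

active = int(columns_total * active_fraction)
exchanges_per_s = 1 / period_s  # 500 Hz

# Assumption: bytes_per_exchange is per column in total, not per neighbor.
total_bw = active * bytes_per_exchange * exchanges_per_s
lower_bw = active * 1_000 * exchanges_per_s  # lower bound: 1 KB every 2 ms

print(f"{active:,} active columns")
print(f"aggregate bandwidth upper bound ~ {total_bw / 1e12:.1f} TB/s")
print(f"aggregate bandwidth lower bound ~ {lower_bw / 1e9:.0f} GB/s")
```

So the model implies somewhere between hundreds of GB/s and a few TB/s of
aggregate message traffic, before counting any per-message computation.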
You are speaking of only one of the thirty or so major structures in
the brain. The cerebral cortex is only one part of the overall picture.
Nevertheless, you are obviously not talking about very much
computational power here. Kurzweil, in The Singularity Is Near (TSIN),
does a back-of-the-envelope calculation of the overall computational
power of the human brain, and his figure is a lot larger than what you
are presenting here.
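To put rough numbers on the gap: Kurzweil's TSIN estimate for functional
whole-brain simulation is commonly summarized as on the order of 10^16
calculations per second (I'm quoting that from memory). Comparing it to your
column model requires assuming a per-message processing cost, which you don't
give; the 1,000 ops per message below is purely hypothetical.

```python
# Commonly cited summary of Kurzweil's TSIN estimate (quoted from memory).
kurzweil_cps = 1e16

# Column model, upper bound: 5e5 active columns, 20 neighbors, 500 Hz.
interactions_per_s = 5e5 * 20 * 500  # 5e9 messages/s

# Hypothetical cost to process one message (not from the post).
ops_per_message = 1_000
model_cps = interactions_per_s * ops_per_message

print(f"column model: ~{model_cps:.0e} ops/s")
print(f"ratio (Kurzweil / column model): ~{kurzweil_cps / model_cps:.0f}x")
```

Even with a generous per-message cost, the column model comes in three or
four orders of magnitude below the whole-brain estimate, which is exactly
the discrepancy I'm pointing at.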
> This architecture may well be all that the brain is doing. The rest is just
> overhead, forced on it by the particular constraints of its physical
I have no doubt that as we figure out what the brain is doing, we'll
be able to optimize. But we have to figure it out first. You seem to
jump straight to a solution as a hypothesis. Now, having a hypothesis
is a good part of the scientific method, but there is that other part
of testing the hypothesis. What is your test?
> Now, if this conjecture is accurate, you tell me how long ago we had the
> hardware necessary to build an AGI.... ;-)
I'm sure we have that much now. The problem is whether the conjecture
is correct. How do you test the conjecture? By doing something
"intelligent". What I don't see yet in your papers, or in your posts
here, are results. What "intelligent" behavior have you simulated with
your hypothesis, Richard? I'm not trying to be argumentative or
challenging, just trying to figure out where you are in your work and
whether you are applying the scientific method rigorously.
> The last time I did this calculation I reckoned (very approximately) that
> the mid-1980s was when we crossed the threshold, with the largest
> supercomputers then available.
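A quick sensitivity check on that mid-1980s threshold, using the message
rates from your own parameters. The Cray-2 (1985) figure of roughly 1.9
GFLOPS peak is quoted from memory:

```python
# Peak throughput of the Cray-2 (1985), roughly; quoted from memory.
cray2_flops = 1.9e9

# Column-model message rate at the upper bound:
# 5e5 active columns * 500 exchanges/s * 20 neighbors.
messages_per_s = 5e5 * 500 * 20  # 5e9 messages/s

# Ops the machine could spend per message if it did nothing else:
ops_per_message = cray2_flops / messages_per_s
print(f"budget: {ops_per_message:.2f} ops per message")
```

That budget comes out below one operation per message at the upper bound,
and even the lower-bound parameters only buy a handful of ops per message.
So the mid-1980s claim seems to need either far cheaper interactions or
much slower exchange rates than the figures you quoted.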
That may be the case. And once we figure out how it all works, we
could well reduce it to this level of computational requirement. But
we haven't figured it out yet.
By most calculations, we spend an inordinate amount of our cerebral
processing on the visual input from our eyes. Have you made any
image-processing breakthroughs? Can you tell a cat from a dog with
your approach? You seem to be focused on concepts and how they are
processed. How does your method approach the nasty problems of image
classification and recognition?