[ExI] Computational resources needed for AGI...

Kelly Anderson kellycoinguy at gmail.com
Mon Feb 7 05:36:58 UTC 2011


On Sun, Feb 6, 2011 at 7:33 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
> That said, there are questions.  If something is distributed, is it (a) the
> dormant, generic "concepts" in long term memory, or is it the active,
> instance "concepts" of working memory?  Very big difference.  I believe
> there are reasons to talk about the long term memory concepts as being
> partially distributed, but that would not apply to the instances in working
> memory.....   and in the above architecture I was talking only about the
> latter.

Ok. I can follow that working memory is likely not holographic. That
actually makes sense. Long-term memory and other lasting storage
probably are, though.
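
To make sure I'm picturing the distinction correctly: the contrast I
have in mind is roughly the one between the two toy representations
below. This is only my gloss on "distributed" versus "instance", not
your model.

import numpy as np

# Toy contrast between a "distributed" code and a "localized" instance.
# Only my gloss on the distinction, not your architecture.

# Distributed: a concept is a pattern spread across many units, and
# similar concepts share overlapping patterns.
rng = np.random.default_rng(0)
cat_longterm = rng.normal(size=1000)                        # pattern over 1000 units
dog_longterm = cat_longterm + 0.3 * rng.normal(size=1000)   # overlaps with "cat"

# Localized instance: an active token in working memory is a single
# discrete slot pointing at a concept, not a smeared-out pattern.
working_memory = [{"concept": "cat", "role": "subject"},
                  {"concept": "mat", "role": "location"}]

print(np.corrcoef(cat_longterm, dog_longterm)[0, 1])  # overlap of the two patterns
print(working_memory[0])                               # one discrete instance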

> If you try to push the idea that the instance atoms (my term for the active
> concepts) are in some sense "holographic" or distributed, you get into all
> sorts of theoretical and practical snarls.

I'll have to take your word for that.

> I published a paper with Trevor Harley last year in which we analyzed a
> paper by Quiroga et al, that made claims about the localization of concepts
> to neurons.  That paper contains a more detailed explanation of the mapping,
> using ideas from my architecture.  It is worth noting that Quiroga et al's
> explanation of their own data made no sense, and that the alternative that
> Trevor and I proposed actually did account for the data rather neatly.

I think I read this paper, or another one of yours with very similar concepts.

>> Kurzweil in TSIN does the back of the
>> envelope calculations about the overall computational power of the
>> human brain, and it's a lot more than you are presenting here.
>
> Of course!
>
> Kurzweil's (and others') calculations are based on the crudest possible
> calculation of a brain emulation AGI, in which every wretched neuron in
> there is critically important, and cannot be substituted for something
> simpler.  That is the dumb approach.

Kurzweil does two separate calculations: one is a VERY brute-force
simulation, and the other is a more functional approach. I think they
differed by around four orders of magnitude. You are talking about
several more orders of magnitude less computation than that. And, while
I don't have enough information about your approach to determine whether
it will work (I assume you don't either), it seems that you are
attempting a premature optimization. Let's get something working first,
then optimize it later.
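
To put the scales side by side, here is the kind of back-of-envelope
arithmetic I have in mind. Every figure below is a placeholder
assumption of mine for the sake of the comparison, not Kurzweil's
published numbers and not your estimate:

import math

# Back-of-envelope comparison of three estimates of "brain-equivalent"
# computation.  All figures are placeholder assumptions for illustration.

NEURONS = 1e11               # assumed neuron count
SYNAPSES_PER_NEURON = 1e4    # assumed average synapses per neuron
UPDATES_PER_SECOND = 1e2     # assumed update rate per synapse

# Brute-force style: simulate every synapse at every update.
brute_force_ops = NEURONS * SYNAPSES_PER_NEURON * UPDATES_PER_SECOND

# Functional style: assume roughly four orders of magnitude less.
functional_ops = brute_force_ops / 1e4

# Million-column style: a million units, each doing a modest amount of
# local relaxation work per second (again, a placeholder figure).
column_ops = 1e6 * 1e6

for name, ops in [("brute force", brute_force_ops),
                  ("functional", functional_ops),
                  ("million columns", column_ops)]:
    print(f"{name:>15}: ~10^{math.log10(ops):.0f} ops/sec")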

> What I am trying to do is explain an architecture that comes from the
> cognitive science level, and which suggests that the FUNCTIONAL role played
> by neurons is such that it can be substituted very adequately by a different
> computational substrate.
>
> So, my claim is that, functionally, the human cognitive system may consist of a
> network of about a million cortical column units, each of which engages in
> relatively simple relaxation processes with neighbors.
>
> I am not saying that this is the exactly correct picture, but so far this
> architecture seems to work as a draft explanation for a broad range of
> cognitive phenomena.
>
> And if it is correct, then the TSIN calculations are pointless.

Sure.
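
Just to check that I understand the shape of the computation you are
describing, here is a toy sketch of a grid of column-like units doing a
simple relaxation with their neighbors. This is my guess at the general
form, not your actual algorithm:

import numpy as np

# Toy "relaxation" network: a grid of column-like units, each repeatedly
# nudging its state toward the average of its four neighbours.  Only a
# guess at the general shape of the computation, not your architecture.

def relax(state, steps=50, rate=0.2):
    """Run a simple neighbour-averaging relaxation on a 2-D grid."""
    for _ in range(steps):
        # Average of the four nearest neighbours (edges wrap around).
        neighbours = (np.roll(state, 1, axis=0) + np.roll(state, -1, axis=0) +
                      np.roll(state, 1, axis=1) + np.roll(state, -1, axis=1)) / 4.0
        state = (1.0 - rate) * state + rate * neighbours
    return state

# A 1000 x 1000 grid is about a million units; each step is cheap and local.
grid = np.random.rand(1000, 1000)
settled = relax(grid)
print(settled.mean(), settled.std())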

>> I have no doubt that as we figure out what the brain is doing, we'll
>> be able to optimize. But we have to figure it out first. You seem to
>> jump straight to a solution as a hypothesis. Now, having a hypothesis
>> is a good part of the scientific method, but there is that other part
>> of testing the hypothesis. What is your test?
>
> Well, it may seem like I pulled the hypothesis out of the hat yesterday
> morning, but this is actually just a summary of a project that started in
> the late 1980s.
>
> The test is an examination of the consistency of this architecture with the
> known data from human cognition.  (Bear in mind that most artificial
> intelligence researchers are not "scientists" .... they do not propose
> hypotheses and test them ..... they are engineers or mathematicians, and what
> they do is play with ideas to see if they work, or prove theorems to show
> that some things should work.  From that perspective, what I am doing is
> real science, of a sort that almost died out in AI a couple of decades ago).
>
> For an example of the kind of tests that are part of the research program I
> am engaged in, see the Loosemore and Harley paper.

I can't argue with that. Darwin sat on his hypothesis for decades
until he had it just right. If you want to do the same, then more
power to you.

My question remains, though: do you have any preliminary results you can
share that indicate your system functions?

>>> Now, if this conjecture is accurate, you tell me how long ago we had the
>>> hardware necessary to build an AGI.... ;-)
>>
>> I'm sure we have that much now. The problem is whether the conjecture
>> is correct. How do you prove the conjecture? Do something
>> "intelligent". What I don't see yet in your papers, or in your posts
>> here, are results. What "intelligent" behavior have you simulated with
>> your hypothesis Richard? I'm not trying to be argumentative or
>> challenging, just trying to figure out where you are in your work and
>> whether you are applying the scientific method rigorously.
>
> The problem of giving you an answer is complicated by the paradigm.  I am
> adopting a systematic top-down scan that starts at the framework level and
> proceeds downward.  The L & H paper shows an application of the method to
> just a couple of neuroscience results.  What I have here are similar
> analyses of several dozen other cognitive phenomena, in various amounts of
> detail, but these are not published yet.  There are other stages to the work
> that involve simulations of particular algorithms.

Simulations of particular algorithms sound promising. Can you say more about that?

> This is quite a big topic.  You may have to wait for my thesis to be
> published to get a full answer, because fragments of it can be confusing.

I started my thesis in 1988. It hasn't been finished either. :-) I
have published one paper, though...

> All I can say at the moment is that the architecture gives rise to simple,
> elegant explanations, at a high level, of a wide range of cognitive data,
> and the mere fact that one architecture can do such a thing is, in my
> experience, unique.  However, I do not want to publish that as it stands,
> because I know what the reaction would be if there is no further explanation
> of particular algorithms, down at the lowest level.  So, I continue to work
> toward the latter, even though by my own standards I already have enough to
> be convinced.

If you are right, it will be worth waiting for. If you aren't sharing
details as you go, then it will be harder for you to get help from
others.

>> That may be the case. And once we figure out how it all works, we
>> could well reduce it to this level of computational requirement. But
>> we haven't figured it out yet.
>>
>> By most calculations, we spend an inordinate amount of our cerebral
>> processing on image processing the input from our eyes. Have you made
>> any image processing breakthroughs? Can you tell a cat from a dog with
>> your approach? You seem to be focused on concepts and how they are
>> processed. How does your method approach the nasty problems of image
>> classification and recognition?
>
> The term "concept" is a vague one.  I used it in our discussion because it
> is conventional.  However, in my own writings I talk of "atoms" and
> "elements", because some of those atoms correspond to very low-level
> features such as the ones that figure in the visual system.

Do you have any results in the area of image processing?

> As far as I can tell at this stage, the visual system uses the same basic
> architecture, but with a few wrinkles.  One of those is a mechanism to spread
> locally acquired features into a network of "distributed, position-specific"
> atoms.  This means that when visual regularities are discovered, they
> percolate down in the system and become distributed across the visual field,
> so they can be computed in parallel.

That sounds right.
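
If I follow the "distributed, position-specific atoms" idea, it sounds
analogous to what vision code does when one locally learned feature
detector is replicated across every position so it can be evaluated in
parallel. A minimal sketch of that analogy (mine, not your mechanism):

import numpy as np
from scipy.signal import correlate2d

# Toy illustration of "spreading a locally acquired feature across the
# visual field": one small feature detector, learned in one place, is
# applied at every position in parallel.  Just an analogy to
# convolution-style weight sharing, not your mechanism.

image = np.random.rand(64, 64)          # stand-in for a visual field
feature = np.array([[1.0, -1.0],        # a tiny "edge-like" detector
                    [1.0, -1.0]])

# The same detector is evaluated at every position of the field,
# so the work is trivially parallel across positions.
response_map = correlate2d(image, feature, mode='valid')
print(response_map.shape)               # (63, 63): one response per position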

> Also, the visual system does contain some specialized pathways (the "what"
> and "where" pathways) that engage in separate computations. These are
> already allowed for in the above calculations, but they are specialized
> regions of that million-column system.
>
> I had better stop.  Must get back to work.

Sounds like the right approach... :-)  If you are convinced, don't let
naysayers get you down. But to get rid of the "it will never fly"
crowd, you have to get something out of the lab eventually. Good luck,
Richard.

-Kelly



