[ExI] Oxford scientists edge toward quantum PC with 10b qubits

Richard Loosemore rpwl at lightlink.com
Fri Jan 28 16:45:26 UTC 2011


Eugen Leitl wrote:
> On Fri, Jan 28, 2011 at 10:28:27AM -0500, Richard Loosemore wrote:
> 
>> You are referring to the idea that building an AGI is about "simply"  
> 
> Simply?  No, there's nothing simple about that.
> 
>> duplicating the human brain?  And that therefore the main obstacle is  
>> having the hardware to do that?
> 
> In general, the brain is doing something. It is metabolically
> constrained, so it cannot afford to be grossly inefficient.
> If you look at the details, you see it is operating pretty
> close to the limits of what is possible in biology. And biology
> can run rings around our current capabilities in many critical
> aspects.

The brain may be operating efficiently at the chemistry level, but that 
says nothing about the functional level.

The constraints under which the "design" of the human brain was 
optimized were determined by accidents of evolution (no 
metallic-conductor or optical signal lines, for one thing).  That does 
not mean that the functional level can only be duplicated with the 
same hardware.

>> This is an approach that might be called "blind replication".  Copying  
>> without understanding.
>> I tried to do that once, when I was a kid.  I built an electronic  
>> circuit using a published design, but with no clue how the components  
>> worked, or how the system functioned.
>>
>> It turned out that there was one small problem, somewhere in my  
>> implementation.  Probably just the one.  And so the circuit didn't work.  
>>  And since I was blind to the functionality, there was absolutely  
>> nothing I could do about it.  I had no idea where to look to fix the  
>> problem.
> 
> Ah, but we know quite a lot, and there's a working instance in front
> of us to check against and compare notes with.

But this was exactly my original point.  We do not know quite a lot: 
the theory, and the engineering understanding of brain functionality, 
are all shot to pieces.  I am stating this as a matter of personal 
experience with this research field: my opinion of the current state 
of the art, from my perspective as a cognitive scientist.  I may be 
wrong about the appalling state of our current understanding, but if 
you and I are debating just how good or how bad that understanding 
is, then the level of understanding IS the issue.

Yes, there is a working instance in front of us (and, of all the people 
at the last AGI conference I went to, I may well have been the one who 
studies the design of that working instance in the most detail), but it 
turns out to be very hard to use that example, because interpreting the 
signals that we can get access to is fantastically hard.

So interpreting the information that we get from looking at the human 
brain is -- and this was part of my original point -- extremely 
theory-dependent.

We both agree, I think, that if the folks at the Whole Brain Emulation 
project were to get a single human brain analog working, and if it 
should happen that this replica did nothing but gibber, or 
free-associate, or spend half its time in epileptic fits, debugging that 
system would require some understanding of how it worked.  At that 
point, understanding the functionality would be everything.

You express optimism that "we know quite a lot", etc etc.

I disagree.  I have seen what the neuroscience people (and the 
theoretical neuroscience people) have in the way of a theory, and it is 
so weak it is not even funny.  A resurrection of old-school behaviorism, 
and a lot of statistical signal analysis.  That is it.


>> To do AGI you need to understand what you are building.  The idea of  
> 
> Absolutely disagree. I don't think there's anything understandable in
> there, at least not simply understandable. No neat sheet of equations
> to write down, and then run along corridors naked, hollering EUREKA
> at the top of your lungs.

But where do you come from when you say this?

Do you have two or three decades of detailed understanding of cognitive 
science under your belt, so we can talk about, for example, the role of 
the constraint-satisfaction metaphor in connectionism, or the 
complex-systems problem and its impact on models of cognition?

Have you tried using weak constraint models to understand a broad range 
of cognitive phenomena?  Do you have a feel, yet, for how many of them 
seem amenable to that treatment, in a unified way?
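
For anyone following along who has not met the idea, here is a minimal 
sketch of what a weak constraint network looks like.  This is my own 
toy illustration (in Python), not a model from the literature: a 
handful of Hopfield-style units whose symmetric weights encode soft, 
violable constraints, and which relax toward whichever overall 
interpretation satisfies the most of them.  The unit layout, weights, 
and bias values are all invented for the example.

    import random

    # Units take states in {-1, +1}.  Symmetric weights encode soft
    # constraints: positive = "these hypotheses support each other",
    # negative = "these hypotheses conflict".  Asynchronous sign
    # updates never increase the energy, so the network settles into
    # a state that satisfies as much weighted constraint as it can.

    N = 4  # units 0,1 = interpretation A; units 2,3 = interpretation B
    weights = [[0.0] * N for _ in range(N)]

    def connect(i, j, w):
        weights[i][j] = weights[j][i] = w

    connect(0, 1, +1.0)              # A-units cohere
    connect(2, 3, +1.0)              # B-units cohere
    for i in (0, 1):
        for j in (2, 3):
            connect(i, j, -1.0)      # A and B are rival interpretations

    bias = [0.2, 0.0, 0.0, 0.0]      # weak evidence favouring A

    def energy(s):
        e = -sum(bias[i] * s[i] for i in range(N))
        e -= 0.5 * sum(weights[i][j] * s[i] * s[j]
                       for i in range(N) for j in range(N))
        return e

    state = [random.choice([-1, 1]) for _ in range(N)]
    for _ in range(100):             # asynchronous relaxation
        i = random.randrange(N)
        net = bias[i] + sum(weights[i][j] * state[j] for j in range(N))
        state[i] = 1 if net >= 0 else -1

    # Settles to one coherent interpretation; the bias makes
    # [1, 1, -1, -1] the deeper of the two minima.
    print(state, energy(state))

No single constraint is hard: strengthen or flip the bias and the 
preferred interpretation changes.  The question I am raising above is 
how far that style of explanation stretches across cognition as a 
whole, not whether a toy like this runs.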

I'm ready to engage in debates at that level, if you want, so we can 
argue about the current state of progress.  But what I hear from you is 
a complaint about the lack of understandability of the human cognitive 
system, from someone who is not even part of the community that is 
trying! ;-)



Richard Loosemore


