[ExI] Oxford scientists edge toward quantum PC with 10b qubits

Eugen Leitl eugen at leitl.org
Fri Jan 28 16:15:49 UTC 2011


On Fri, Jan 28, 2011 at 10:28:27AM -0500, Richard Loosemore wrote:

> You are referring to the idea that building an AGI is about "simply"  

Simply, yes, and no, there's nothing simple about that.

> duplicating the human brain?  And that therefore the main obstacle is  
> having the hardware to do that?

In general, the brain is doing something. It is metabolically
constrained, so it cannot afford to be grossly inefficient.
If you look at the details, you see it's operating pretty
close to the limits of what is possible in biology. And biology
can run rings around our current capabilities in many critical
aspects.

As a student of artificial and natural systems I am painfully
aware of the current limitations of computers.

> This is an approach that might be called "blind replication".  Copying  
> without understanding.

If you can instantiate an expert at the drop of a hat, that's
pretty good in my book.

> I tried to do that once, when I was a kid.  I built an electronic  
> circuit using a published design, but with no clue how the components  
> worked, or how the system functioned.
>
> It turned out that there was one small problem, somewhere in my  
> implementation.  Probably just the one.  And so the circuit didn't work.  
>  And since I was blind to the functionality, there was absolutely  
> nothing I could do about it.  I had no idea where to look to fix the  
> problem.

Ah, but we know quite a lot, and there's a working instance in front
of us to check against and compare notes with.

> To do AGI you need to understand what you are building.  The idea of  

Absolutely disagree. I don't think there's anything understandable in
there, at least not simply understandable. There's no neat sheet of
equations to write down and then run along corridors naked, hollering
EUREKA at the top of your lungs.

> successfully replicating a system as fantastically complex as the human  
> brain, without first sorting out the FUNCTIONALITY -- i.e. the software  

That's just the point: there is no software. It's a physical system with
state, which implements different processes at different temporal scales
using whatever was close at hand at the time it was needed.

> -- is a hollow dream.
>
> (Not to mention that virtually nobody in the AGI community is actually  
> trying to do that right now.  WBE is done by neuroscientists who seem  
> not to have thought about these issues much, and they don't call what  
> they do "AGO")

When a field is stuck, it is frequently people from another field who
come in and bring the torch of illumination.

>
>
>>> The relevance of hardware advances like this is completely unknown 
>>> until  a working design can be supplied.
>>
>> If you try to track what a given piece of neocortex is doing
>> in current hardware you will realize that you need a lot of crunch.
>
> Track what the neocortex is doing?  I am doing that.  That is my  
> research.... except that I am doing it at the high-level, functional  

I don't see how you can extract anything at the high level without
looking at everything from the ultrastructural down to the molecular scale.

> level.  What I am trying to do is understand how the neocortex works,  
> not how the signals are chasing each other around.  Those are two  

I don't think there's anything there to understand, but of course
I don't know that for sure. So yours is a valuable effort.

> different things, like the difference between electronic engineering and  
> software engineering.

Biology doesn't do OSI layers.

> And, so far, it looks as though the cortex may be playing a functional  
> role that can be implemented with a few orders of magnitude less  
> hardware than the brain uses.

Do you have a nice publication track record we can take a look at?

>>> We are a long way away from AGI, unless people start to wake up to 
>>> the  farcical state of affairs in artificial intelligence at the 
>>> moment.
>>
>> Finally something we can agree on.
>
> Well, we agree on this (as you probably know) for completely different  
> reasons.

Maybe, maybe not.

> At least I think we do.  If you are saying this because you agree with  
> the critique in my complex systems paper, I will be a pleasantly  

Do you have a reference for your complex systems paper to share?

> surprised person today.
>
>
> Richard Loosemore
>
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


