[ExI] The symbol grounding problem in strong AI

Ben Zaiboc bbenzai at yahoo.com
Mon Dec 14 14:09:57 UTC 2009


> Damien Broderick <thespike at satx.rr.com> wrote:

> On 12/13/2009 2:55 PM, Ben Zaiboc wrote:
> 
> > if you don't think a program can solve the mysteriously difficult
> > 'symbol grounding problem', how can a brain do it?  Are you saying
> > that a system that processes and transmits signals as ion potential
> > differences can do things that a system that processes and transmits
> > signals as voltages can't?  What about electron spins? photon
> > wavelengths? polarisation angles? quantum states? magnetic
> > polarities? etc., etc.
> >
> > Is there something special that ion potential waves have over these
> > other ways of representing and processing information?
> >
> > If so, what?
> 
> I have a lot of sympathy with Gordon's general point, although I think
> the Chinese Room completely messes it up. The case is that a linear,
> iterative, algorithmic process is the wrong kind of thing to
> instantiate what happens in a brain during consciousness (and the rest
> of the time, for that matter). It's some years since I looked into
> this problematic closely, but as I recall the line of thinking
> developed by Hopfield and Freeman etc still looked promising: basins
> of attraction, allowing multiple inputs to coalesce and mutually
> transform synaptic maps, vastly parallel. Maybe a linear process could
> emulate this eventually, but I imagine one might run into the same
> kinds of computational and memory space explosions that afflict an
> internalized Chinese Room guy. Anders surely has something timely to
> say about this.


If I understand you right, this boils down to parallel vs. linear processing?

There are two answers to this.  The first is that there's no reason we can't build massively parallel computer systems (we don't do much of this at present because we haven't really needed to).  The second is that, as you say, "a linear process could emulate this".  In fact we already have many examples of exactly that: just about every neural network program simulates its nominally parallel dynamics on serial hardware.
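To make the emulation point concrete, here's a minimal sketch in Python/NumPy (my own toy illustration, not anything from the thread): a tiny Hopfield network, the same "basins of attraction" model Damien mentions.  The dynamics are notionally parallel, but a plain serial loop computes them one neuron at a time; the names train and recall are just mine.

import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    # Hebbian learning: a sum of outer products stores each pattern
    # as an attractor in the weight matrix.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=5):
    # Serial emulation of the "parallel" dynamics: visit the neurons
    # one at a time until the state settles into its basin of attraction.
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):  # random serial order
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Store one +/-1 pattern, corrupt a quarter of its bits, recall it.
pattern = rng.choice([-1, 1], size=64)
w = train(pattern[None, :])
noisy = pattern.copy()
flipped = rng.choice(64, size=16, replace=False)
noisy[flipped] *= -1
print("bits recovered:", np.sum(recall(w, noisy) == pattern), "/ 64")

Nothing in the serial sweep knows or cares that the model it computes is "vastly parallel"; the corrupted pattern still falls back into its attractor.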

I'd expect a realistic software mind to exploit both methods, but even if you take the extreme case and say linear processing could *never* successfully emulate a parallel system of sufficient complexity to embody a mind, so what?  We just use physically parallel hardware components, as the brain does.  A big job, yes, and beyond our current capabilities, yes, but not for long.
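And for the hardware half of the argument, the same dynamics can be written as one synchronous step in which every neuron fires at once - exactly the step a physically parallel machine would perform concurrently.  A hedged continuation of the sketch above (recall_sync is my name for it; it reuses the w that train produced):

import numpy as np

def recall_sync(w, state, steps=10):
    # Synchronous variant of recall() above: all neurons update in a
    # single vectorized step, which NumPy already spreads across SIMD
    # lanes / BLAS threads - a small taste of the physically parallel case.
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

Same attractor, same answer; only the scheduling differs, which is rather the point.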

And when that time comes, you have two computers, one synthetic, one biological.  Given similar programming (whether that be in the form of physical wiring arrangements, chemical sequences, or software-controlled logic units), what reason is there to think one can do something the other can't?

I can recommend Steve Grand's book "Creation: Life and How to Make It" for a good insight into how information processes can ascend through levels of abstraction, resulting in something completely different from the original process.  It helped me see how ions passing through holes in a membrane can result in me writing this email, and that in turn makes it much easier to see how electrons in logic gates could have the same effect, through a cascade of layers of abstraction.

Asking "how can a computer program possibly give rise to consciousness?" is a bit like asking how can hydrogen bonding possibly give rise to El Niño.

Ben Zaiboc
