[ExI] The symbol grounding problem in strong AI

Damien Broderick thespike at satx.rr.com
Sun Dec 13 21:41:01 UTC 2009


On 12/13/2009 2:55 PM, Ben Zaiboc wrote:

> if you don't think a program can solve the mysteriously difficult 'symbol grounding problem', how can a brain do it?  Are you saying that a system that processes and transmits signals as ion potential differences can do things that a system that processes and transmits signals as voltages can't?  What about electron spins? photon wavelengths? polarisation angles? quantum states? magnetic polarities? etc., etc.
>
> Is there something special that ion potential waves have over these other ways of representing and processing information?
>
> If so, what?

I have a lot of sympathy with Gordon's general point, although I think 
the Chinese Room completely messes it up. The case is that a linear, 
iterative, algorithmic process is the wrong kind of thing to instantiate 
what happens in a brain during consciousness (and the rest of the time, 
for that matter). It's some years since I looked into this problem 
closely, but as I recall, the line of thinking developed by Hopfield, 
Freeman, and others still looked promising: basins of attraction allowing 
multiple inputs to coalesce and mutually transform synaptic maps, all of 
it vastly parallel. Maybe a linear process could emulate this eventually, 
but I imagine one might run into the same kinds of computational and 
memory space explosions that afflict an internalized Chinese Room guy. 
Anders surely has something timely to say about this.
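[Editor's note: the post does not include code; the following is a minimal 
illustrative sketch, not Damien's or Hopfield's own, of the "basins of 
attraction" idea using a toy Hopfield network in Python/NumPy. A stored 
pattern acts as an attractor: a corrupted input state is iteratively 
pulled back into its basin.]

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix for a set of +/-1 patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)          # strengthen co-active pairs
    np.fill_diagonal(w, 0)           # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=20, seed=0):
    """Asynchronous updates: each unit flips toward its local field."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store one 25-unit pattern, corrupt 6 units, and watch it settle back
# into the stored attractor.
rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=25)
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[rng.choice(25, size=6, replace=False)] *= -1
print("recovered:", np.array_equal(recall(w, noisy), pattern))
```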

Damien Broderick



