[ExI] Wernicke's aphasia and the CRA

Damien Broderick thespike at satx.rr.com
Tue Dec 8 01:21:05 UTC 2009


On 12/7/2009 6:39 PM, Gordon Swobe wrote:

> Searle would laugh and say that's like saying water can never freeze into a solid, not in a billion years and not in a trillion. On his view, the brain takes on the property of consciousness in a manner analogous to that by which water takes on the property of solidity.

You summarize his position nicely. However, this ignores what actually 
happens with people learning a new skill, especially a new language. At 
first, the elements are laboriously memorized, then chunked, then the 
activity is practised, and at a certain point the process does indeed... 
crystallize. Your consciousness alters. You are no longer arduously 
performing an algorithm, you're *reading* or *speaking* the other 
language (or playing tennis, not just hitting the ball).

You claimed on Searle's behalf:

> Some Chinese guy comes along and says "squiggle" so Cram dutifully 
> looks it up in his mental look up table and replies "squaggle". ... But 
> Cram has no idea what "squiggle" or "squaggle" means. From his point of 
> view, he's no different from the Wernicke aphasiac who speaks nonsense 
> with proper syntax.
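
For what it's worth, the rote procedure attributed to Cram in that quote amounts to nothing more than table lookup. A minimal sketch (the "squiggle"/"squaggle" tokens are the hypothetical symbols from the thought experiment, and the table contents are stand-ins, not anything from Searle):

```python
# A bare lookup-table responder, as in the quoted scenario: input symbols
# map to memorized replies, with no representation of meaning anywhere.
lookup_table = {
    "squiggle": "squaggle",  # hypothetical symbol pair from the thought experiment
}

def cram_reply(symbol: str) -> str:
    """Return the memorized response for an input symbol.

    The procedure consults only the table; nothing in it encodes what
    the symbols mean -- which is precisely Searle's point.
    """
    return lookup_table.get(symbol, "")

print(cram_reply("squiggle"))
```

That the whole exchange reduces to a one-line dictionary lookup is exactly why the scale objection below matters: a real speaker's brain is not doing anything remotely this simple.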

Leaving aside the absurd scale problems with this analogy (the guy 
equals a neuron with very slow synaptic connections to other neurons), 
if this could be instantiated in a memory-augmented brain I'd expect 
that Cram would finally have an epiphanic moment of illumination and 
find that he *did* understand Chinese. Would an equivalent nonhuman 
computer? Maybe not, unless it emulated an embodied brain with an 
in-built grammar menu, just like us at birth. And at that point we seem 
to rejoin Searle in his agreement that AI is possible, using the correct 
causal architecture and powers.

Damien Broderick
