[ExI] The symbol grounding problem in strong AI
Damien Broderick
thespike at satx.rr.com
Fri Dec 18 20:14:04 UTC 2009
On 12/18/2009 9:36 AM, Gordon Swobe wrote:
> If programs drive your artificial neurons (and they do) then Searle rightfully challenges you to show how those programs that drive behavior can in some way constitute a mind, i.e., he challenges you to show that you have not merely invented weak AI, which he does not contest.
I see that Gordon ignored my previous post drawing attention to the
Hopfield/Walter Freeman paradigms. I'll add this comment anyway: it is
not at all clear to me that neurons and other organs and organelles are
computational (especially in concert), even if their functions might be
emulable by algorithms. Does a landslide calculate its path as it falls
under gravity into a valley? Does the atmosphere perform a calculation
as it helps create the climate of the planet? I feel it's a serious error
to think so, even though the reigning metaphors among physical
scientists and programmers make it inevitable that this kind of metaphor
or simile (it's not really a model) will be mistaken for a homology. I
suspect that this is the key to whatever it is that puzzles Searle and
his acolytes, which I agree is a real puzzle. I don't think the Chinese
Room helps clarify it, however. I haven't read much of Humberto Maturana
or the Santiago theory of cognition, but that might be one place to look
for some handy hints.
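
For what it's worth, here is a toy sketch of the sort of attractor
dynamics the Hopfield paradigm trades in. It's my own illustration in
Python with NumPy, not anything drawn from Hopfield's or Freeman's
papers: an algorithm that emulates a corrupted state "rolling downhill"
into a valley of an energy landscape, which is the sense in which a
process can be emulable by a program without the physical system itself
doing any calculating.

import numpy as np

# Toy Hopfield network: store one pattern, then let a noisy state
# settle into the network's energy minimum -- an algorithmic emulation
# of attractor dynamics, much as one might numerically emulate a
# landslide coming to rest in a valley.

rng = np.random.default_rng(0)

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # stored memory (+/-1 units)
n = pattern.size

# Hebbian weights: outer product of the pattern, no self-connections
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(state):
    """Hopfield energy; asynchronous updates never increase it."""
    return -0.5 * state @ W @ state

# Start from a corrupted version of the stored pattern
state = pattern.copy()
flipped = rng.choice(n, size=3, replace=False)
state[flipped] *= -1

print("start  :", state, "E =", energy(state))

# Asynchronous updates: each unit aligns with its local field
for _ in range(5):
    for i in rng.permutation(n):
        h = W[i] @ state
        state[i] = 1 if h >= 0 else -1

print("settled:", state, "E =", energy(state))
print("recovered stored pattern:", np.array_equal(state, pattern))

Run it and the three flipped units snap back to the stored pattern as
the energy drops. The point is only that the emulation works, not that
landslides or brains are therefore computers.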
Damien Broderick