[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sat Dec 19 01:05:45 UTC 2009


--- On Fri, 12/18/09, Damien Broderick <thespike at satx.rr.com> wrote:

> > If programs drive your artificial neurons (and they
> do) then Searle rightfully challenges you to show how those
> programs that drive behavior can in some way constitute a
> mind, i.e., he challenges you to show that you have not
> merely invented weak AI, which he does not contest.
> 
> I see that Gordon ignored my previous post drawing
> attention to the Hopfield/Walter Freeman paradigms.

I didn't ignore it, Damien. I just have very little time and lots of posts to respond to, not only here but on other discussion lists. In your post before your last, you wrote something along the lines of "BULLSHIT". (No, I take that back. That's exactly what you wrote.) I don't mind a little profanity, and I didn't take offense, but as a general rule I tend to give priority to posts of those who seem most interested in what I have to say. 


> I'll add this comment anyway: it is not at all clear to me that
> neurons and other organs and organelles are computational
> (especially in concert), even if their functions might be
> emulable by algorithms. Does a landslide calculate its path
> as it falls under gravity into a valley? Does the atmosphere
> perform a calculation as it helps create the climate of the
> planet? I feel it's a serious error to think so, even though
> the reigning metaphors among physical scientists and
> programmers make it inevitable that this kind of metaphor or
> simile (it's not really a model) will be mistaken for an
> homology. I suspect that this is the key to whatever it is
> that puzzles Searle and his acolytes, which I agree is a
> real puzzle. 

Well, I don't see that as very relevant, but then I base my opinion only on what you've written above.

Searle considers it trivially true that we could in principle create a perfectly accurate computer simulation of the brain. I don't see that his argument would change if it turned out we could not, though that would certainly put a damper on the strong AI research program, which he already considers a waste of time.
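To make concrete what such a simulation amounts to, here is a minimal Hopfield-style sketch in Python (purely illustrative; the network size, pattern, and function names are my own): a program that stores one binary pattern with a Hebbian weight matrix and then recovers it from a corrupted cue.

import numpy as np

def train(patterns):
    # Hebbian outer-product rule; zero the diagonal so no unit drives itself.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / patterns.shape[0]

def recall(w, state, steps=20):
    # Synchronous sign updates; settles into the stored attractor in simple cases.
    s = state.copy()
    for _ in range(steps):
        nxt = np.where(w @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

pattern = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])  # one stored memory (+1/-1 units)
w = train(pattern)
cue = pattern[0].copy()
cue[0] *= -1                                        # flip one unit to corrupt the cue
print(recall(w, cue))                               # recovers the original pattern

Nothing in that little program obviously constitutes understanding of the pattern it completes; at best it is weak AI, which is all Searle concedes. Whether running such updates on enough units would ever add up to a mind is exactly the point in dispute.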

Good to see that you agree a "real puzzle" exists. That tells me you understand I'm not just bullshitting.

-gts
