[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Dec 14 14:10:26 UTC 2009


Re-reading your last paragraph, Stathis, it seems you want to know what I think about replacing neurons in the visual cortex with artificial neurons that do *not* have the essential ingredient for consciousness. I would not dare speculate on that question, because I have no idea if conscious vision requires that essential ingredient in those neurons, much less what that essential ingredient might be.

I agree with your general supposition, however, that we're missing some important ingredient to explain consciousness. We cannot explain it by pointing only to the means by which neurons relate to other neurons, i.e., by Chalmers's functionalist theory, at least not at this time in history.

Functionalism seems a very reasonable religion, and a reason for hope, but I don't see it as any more than that.

-gts

--- On Mon, 12/14/09, Gordon Swobe <gts_2000 at yahoo.com> wrote:

> From: Gordon Swobe <gts_2000 at yahoo.com>
> Subject: Re: [ExI] The symbol grounding problem in strong AI
> To: "ExI chat list" <extropy-chat at lists.extropy.org>
> Date: Monday, December 14, 2009, 8:45 AM
> --- On Sun, 12/13/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
> 
> > Changing from a man to a punch card reading machine does not
> > make a difference to the argument insofar as Searle would still
> > say the room has no understanding and his opponents would still
> > say that it does.
> 
> The question comes back to semantics. Short of espousing
> the far-fetched theory of panpsychism, no serious
> philosopher would argue that a punch card reading machine
> can have semantics/intentionality, i.e., mindful
> understanding of the meanings of words. 
> 
> People can obviously have it, however, and so Searle put a
> person into his experiment to investigate whether he would
> have it. He concluded that such a person would not have it.
> 
> I should point out here, however, that his formal argument
> does not depend on the thought experiment for its validity.
> Searle just threw the thought experiment out there to help
> illustrate his point, then later formalized it into a proper
> philosophical argument sans silly pictures of men in Chinese
> rooms.
> 
> > To address the strong AI / weak AI distinction I put to you a
> > question you haven't yet answered: what do you think would happen
> > if part of your brain, say your visual cortex, were replaced with
> > components that behaved normally in their interaction with the
> > remaining biological neurons, but lacked the essential ingredient
> > for consciousness?
> 
> You need to show that the squirting of neurotransmitters
> between giant artificial neurons made of beer cans and
> toilet paper will result in a mind that understands
> anything. :-) How do those squirts cause consciousness? If
> you have no scientific theory to explain it, then, well,
> we're back to Searle's default position: that as far as we
> know, only real biological brains have it.
> 
> -gts
> 
> 


      


