[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Dec 17 02:21:35 UTC 2009


Stathis,

You wrote this earlier:

> So, Searle allows that the behaviour of a neuron could be
> copied by a computer program, but that this artificial neuron
> would lack the essential ingredient for consciousness. 

You then tried to refute the position you attributed to Searle.

But did you understand that the "artificial neuron" to which you referred exists only as a computer simulation? That is, only as some lines of code, some zeros and ones, some 'on's and 'off's, some stuff going on in RAM? (See the short sketch below the stipulations.)

And do you really hold the position that, contrary to Searle's claim, this artificial neuron I've described has consciousness?

I need some clarification here, because we have also discussed manufactured artificial neurons.

Let's stipulate for clarity:

Simulated = in a program
Artificial = manufactured
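
To make the stipulation concrete, here is a minimal sketch of what I mean by a simulated neuron. The class, the names and the numbers are my own invention, purely for illustration: whatever input/output behaviour it reproduces, it exists only as some values in RAM and a bit of arithmetic.

// simulated_neuron.cpp : illustrative sketch only; all names and
// numbers are invented. A "simulated" neuron in the stipulated
// sense is nothing but state in RAM plus arithmetic that mimics a
// neuron's input/output behaviour.

#include <cstdio>
#include <vector>

class SimulatedNeuron {
public:
    SimulatedNeuron(const std::vector<double>& weights, double threshold)
        : weights_(weights), threshold_(threshold) {}

    // Sum the weighted inputs and "fire" (return 1.0) if the sum
    // reaches the threshold: a crude stand-in for whatever function
    // a real neuron computes.
    double step(const std::vector<double>& inputs) const {
        double sum = 0.0;
        for (std::size_t i = 0; i < inputs.size() && i < weights_.size(); ++i)
            sum += weights_[i] * inputs[i];
        return sum >= threshold_ ? 1.0 : 0.0;
    }

private:
    std::vector<double> weights_;  // just doubles in RAM
    double threshold_;             // just another double in RAM
};

int main() {
    SimulatedNeuron n({0.5, 0.25, 0.25}, 0.6);  // invented parameters
    double out = n.step({1.0, 1.0, 0.0});       // 0.5 + 0.25 = 0.75 >= 0.6
    std::printf("simulated neuron output: %f\n", out);  // prints 1.000000
    return 0;
}

An artificial neuron in the stipulated sense would instead be a manufactured physical device that you could hold in your hand. No such device appears anywhere in the code above, and that is exactly my point.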

-gts






--- On Wed, 12/16/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> From: Stathis Papaioannou <stathisp at gmail.com>
> Subject: Re: [ExI] The symbol grounding problem in strong AI
> To: gordon.swobe at yahoo.com, "ExI chat list" <extropy-chat at lists.extropy.org>
> Date: Wednesday, December 16, 2009, 7:36 PM
> 2009/12/17 Gordon Swobe <gts_2000 at yahoo.com>:
> > --- On Tue, 12/15/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
> >
> >> http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html
> > ...
> >
> >> So, Searle allows that the behaviour of a neuron could be
> >> copied by a computer program, but that this artificial neuron
> >> would lack the essential ingredient for consciousness. This
> >> claim can be refuted with a purely analytic argument, valid
> >> independently of any empirical fact about the brain. The
> >> argument consists in considering what you would experience if
> >> part of your brain were replaced with artificial neurons that
> >> are functionally equivalent but (for the purpose of the
> >> reductio) lacking in the essential ingredient of consciousness.
> >
> > Glad to see you read that article.
> >
> > I don't understand why you say you refuted anything with a
> > purely analytic argument that does not depend on any empirical
> > fact, when your argument consists of imagining an empirical
> > fact! But that's beside the point...
> 
> The form of the argument is such that the conclusion is true if
> the premises are true: that is, IF it is possible to simulate the
> behaviour of a neuron with a computer program THEN it is also
> possible to simulate consciousness.
> 
> Return to your simplified brain X-X-0-0-0-0, where X are the
> artificial neurons in the visual cortex and 0 are the biological
> neurons in the association, language and motor cortex. The X
> neurons' job is to behave in such a way that the 0 neurons can't
> tell that they aren't 0 neurons. According to Searle, this
> masquerade should be possible. As a result, the subject with the
> cyborgised brain will tell me correctly how many fingers I am
> holding up, declare that everything looks normal and that he
> feels just the same as he did before the operation. This is what
> *must* happen. It's true in all possible worlds, true such that
> even an omnipotent God couldn't make it not true. Please explain
> if you disagree!
> 
> Now, it is logically possible that although the subject will
> behave exactly the same as if no change to his brain had been
> made, his consciousness would be different. That is, he might be
> blind and not notice that he is blind, or he might notice that he
> is blind yet smile and say that everything is just fine while
> attempting in vain to communicate his terror. The first
> possibility would make the notion of consciousness meaningless,
> for if nothing else, we understand that having a perception means
> that we realise that we have the perception. The second
> possibility would mean that the subject is thinking without his
> brain, since his brain is constrained to behave normally. Both
> these scenarios seem quite implausible, if logically possible.
> Much easier to simply say that the subject would be normally
> conscious.
> 
> > It looks like you want to refute Searle's claim that although a
> > computer simulation of a brain is possible, such a simulation
> > will not have intentionality/semantics. On Searle's view it
> > won't have any more semantics than a computer simulation of
> > anything else has. A simulation is, umm, a simulation.
> 
> While a simulation of a thunderstorm is not wet, a simulation of
> a brain is conscious. That's the difference between brains and
> thunderstorms.
> 
> > I once wrote a gaming application in C++ that contained an
> > imaginary character. Because the character interacted in
> > complex ways with the human player in spoken language (it used
> > voice recognition), I found it handy to create an object called
> > "brain" in my code to represent the character's thought
> > processes. Had I had the knowledge and the time, I could have
> > created a complete computer simulation of a real brain.
> >
> > Assume I had done so. Did my character have understanding of
> > the words it manipulated? Did the program itself have such
> > understanding? In other words, did either the character or the
> > program overcome the symbol grounding problem?
> >
> > No and No and No. I merely created a computer simulation in
> > which an imaginary character with an imaginary brain pretended
> > to overcome the symbol grounding problem. I did nothing more
> > interesting than does a cartoonist who writes cartoons for your
> > local newspaper.
> 
> A complex enough game character probably would be conscious.
> There are gradations of consciousness: bacterium, ant, lizard,
> mouse, dog, human, superhuman AI.
> 
> 
> -- 
> Stathis Papaioannou
> 


      


