[ExI] Meaningless Symbols.

Gordon Swobe gts_2000 at yahoo.com
Sat Jan 16 20:10:57 UTC 2010


--- On Sat, 1/16/10, Eric Messick <eric at m056832107.syzygy.com> wrote:

> I'm not at all sure that's what Gordon thinks, although it
> is difficult to tell for sure.

In a nutshell: the human brain/mind has capabilities that software/hardware (s/h) systems do not and cannot have. Ergo, we cannot duplicate brains on s/h systems; strong AI is false.
 
> In discussing the partial replacement thought experiment he
> says that the surgeon will replace the initial set of neurons and
> find that they don't produce the desired behavior in the patient, so he
> has to go back and tweak things again.

I believe experience affects behavior, including neuronal behavior. This means the surgeon/programmer of programmatic neurons in the experiment faces an exceedingly difficult, if not impossible, challenge even in creating weak AI in his patient. He cannot anticipate what kinds of experiences his patient will have after leaving the hospital, yet he must program his patient not only to respond appropriately to those experiences but also to change his subsequent behavior appropriately.
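To make the point concrete (a toy sketch of my own, in Python, not anything from the thought experiment itself): responses fixed in advance look like the first function below, while responding to unanticipated experience and then changing subsequent behavior requires something like the second, whose state the surgeon cannot know in advance.

# Illustrative contrast only: fixed responses vs. behavior that
# changes with experience. All names and values here are made up.

FIXED_RESPONSES = {"greeting": "nod", "threat": "flinch"}

def fixed_neuron(stimulus):
    # Anything the programmer did not anticipate falls through.
    return FIXED_RESPONSES.get(stimulus, "no response")

class AdaptiveNeuron:
    def __init__(self):
        self.weights = {}  # shaped by experience after leaving the hospital

    def respond(self, stimulus, outcome):
        # Behavior now depends on a history the surgeon could not foresee.
        self.weights[stimulus] = self.weights.get(stimulus, 0.0) + outcome
        return "approach" if self.weights[stimulus] > 0 else "avoid"

print(fixed_neuron("unanticipated experience"))   # -> "no response"
n = AdaptiveNeuron()
print(n.respond("unanticipated experience", 1.0)) # -> "approach"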

> Everyone else seems to think Gordon means that tweaking is
> in that programming, and that eventually the surgeon manages to get
> the program right.  He's actually said that the surgeon
> will need to go in and replace more and more of the patient's brain 
> in order to get the patient to pass the Turing test, and that the 
> extensive removal of biological neurons is what turns the patient into a
> zombie.

The patient arrived at the hospital already a near zombie, suffering from complete receptive aphasia -- a total inability to understand words -- due to damage to Wernicke's area in his brain. I consider it unclear whether he can survive the operation without losing what little sense of self he might have left; it is unclear that he even has a sense of self before the operation. Again, he presents with no understanding of words, presumably not even the words "I" and "me".

> Since Gordon also claims that neurons are computable, this
> seems to me to be a contradiction in his position.

I allow that most everything in the world, including the brain, lends itself to computation. But this fact means nothing: a computational description of a thing amounts to nothing more than a description of that thing, and descriptions of things do not equal the things they describe.
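For illustration only (a minimal sketch of my own, with made-up parameter values): the few lines of Python below compute the standard leaky integrate-and-fire description of a neuron's membrane voltage. The program produces numbers that describe spiking; it does not thereby become a neuron or fire a single real action potential, any more than a computed weather model gets wet.

# A leaky integrate-and-fire model: a *description* of voltage
# dynamics, computed step by step with illustrative parameters.

def simulate_lif(input_current=1.5, steps=200, dt=1.0,
                 tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the described neuron 'spikes'."""
    v = v_rest
    spike_times = []
    for t in range(steps):
        # Euler step of dv/dt = (-(v - v_rest) + input_current) / tau
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:      # threshold crossed: record a "spike"
            spike_times.append(t)
            v = v_reset        # reset the described voltage
    return spike_times

print(simulate_lif())  # numbers describing spikes; nothing actually fires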

> I'm going to also guess that Gordon thinks the thing we 
> don't currently know how to do in making a programmatic neuron 
> is to derive semantics from syntax.  I think I remember him saying
> he believes this to eventually be possible, but that we currently have
> no clue how.

No, I deny that formal programs can have or cause semantics.

> So, Gordon seems to think that consciousness is apparent in
> behavior,

I'm not sure what you mean by "apparent", but I do not believe we can prove an entity has consciousness from its behavior. It takes a philosophical argument.

> Gordon:  did I represent your position accurately
> here?

See above. Thanks for joining in, by the way. There are lots of messages in this thread, and I don't always have time to answer even those addressed to me.

-gts


      



More information about the extropy-chat mailing list