[ExI] Meaningless Symbols.

Eric Messick eric at m056832107.syzygy.com
Sat Jan 16 21:40:13 UTC 2010


Gordon writes:
>In a nutshell: the human brain/mind has capabilities that
> software/hardware systems do not and cannot have. Ergo, we cannot
> duplicate brains on s/h systems; strong AI is false.

and, later:

>I allow that most everything in the world including the brain lends
> itself to computation. But this fact means nothing. A computational
> description of a thing amounts to nothing more than a description of
> the thing, and descriptions of things do not equal the things they
> describe.

Would you say that "description != thing" is the reason computer
systems cannot replicate the capability of brains to understand?

In other words, if we could make a simulation of water that actually
included wetness, would we also be able to write a program that
was conscious of that wetness?

>I believe experience affects behavior including neuronal
> behavior. This means the surgeon/programmer of programmatic neurons
> in the experiment faces an exceedingly difficult if not impossible
> challenge even in creating weak AI in his patient. He cannot
> anticipate what kinds of experiences his patient will have after
> leaving the hospital, but he must program his patient not only to
> respond appropriately to those experiences but also to change his
> subsequent behavior appropriately.

One of the primary behaviors of neurons is to change their response to
signals over time.  The basic way this happens, activity-dependent
change in synaptic strength (synaptic plasticity), is well
characterized.  Any programmatic neuron would be coded to change in
the same manner.  The mechanism is not all that complicated.  It is
also the fundamental mechanism behind learning, and the way in which
experience alters future behavior.

Have you studied the molecular pathways that mediate these changes?

Do you have any reason to think this type of change would be difficult
to program?
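
To make that concrete, here is a minimal sketch (Python, purely
illustrative) of a programmatic neuron whose connection strengths
change with experience.  The Hebbian-style update rule, the threshold,
and the learning rate are assumptions chosen for clarity, not a claim
about the actual molecular machinery or about any particular AI system.

  # A simulated neuron whose synaptic weights change with experience.
  # The update rule and constants are illustrative assumptions.
  class PlasticNeuron:
      def __init__(self, n_inputs, learning_rate=0.01):
          self.weights = [0.3] * n_inputs   # initial synaptic strengths
          self.learning_rate = learning_rate

      def fire(self, inputs):
          # Output is a thresholded weighted sum of the inputs.
          activation = sum(w * x for w, x in zip(self.weights, inputs))
          output = 1.0 if activation > 0.5 else 0.0
          # Plasticity: strengthen synapses whose inputs were active
          # when the neuron fired.
          for i, x in enumerate(inputs):
              self.weights[i] += self.learning_rate * output * x
          return output

  # Repeated exposure to a pattern changes the neuron's later behavior:
  neuron = PlasticNeuron(n_inputs=3)
  for _ in range(100):
      neuron.fire([1.0, 1.0, 0.0])   # "experience"
  print(neuron.weights)              # the trace that experience left behind

A real model would track far more of the biological detail, but the
computational core, adjusting connection strengths as a function of
activity, is just this kind of update, and none of it has to be
anticipated by the programmer in advance.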

>No, I deny that formal programs can have or cause semantics.

I think you mean something very different by the word "semantics" than
most of the rest of us engaging in this discussion do.  I suspect that
this difference also stems from "description != thing".

>> So, Gordon seems to think that consciousness is apparent in
>> behavior,
>
>Not sure what you mean by apparent, but I do not believe we can prove
> an entity has consciousness from its behavior. It takes a
> philosophical argument.

By apparent, I mean that an individual who is capable of consciousness
will behave differently from one who is incapable.  That difference in
behavior is something that evolution could select for.  Essentially,
consciousness makes you more fit in some way.  That doesn't
necessarily mean that we can deduce the existence of consciousness
based on any specific trait.

-eric
