[ExI] Meaningless Symbols.

Eric Messick eric at m056832107.syzygy.com
Sun Jan 17 07:00:55 UTC 2010

Gordon writes:
>--- On Sat, 1/16/10, Eric Messick <eric at m056832107.syzygy.com> wrote:
>> Would you say that "description != thing" is the reason
>> computer systems cannot replicate the capability of brains to
>> understand?
>In a general sense, yes. I think S/H systems can simulate
> understanding as in weak AI but that they cannot have conscious
> understanding as in strong AI.

Putting these together, what we're discussing here is the truth value
of the statement: "simulated understanding != real understanding", and
what you might consider a corollary: "simulated consciousness != real
consciousness".
You seem to take these statements as axioms.

I think that both consciousness and understanding are computational
processes.  Turing showed that beyond a very simple set of
capabilities, all computational processes can be considered equivalent
(modulo obvious performance differences).  If both these things are
true, then simulated consciousness must equal real consciousness, in
violation of your axiom.
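To make the equivalence claim concrete, here's a toy illustration (not
Turing's formal result, just a sketch of what "equivalent modulo
performance" means): two structurally different processes that compute
the same function, judged only by input/output behavior.

```python
# Two different "substrates" computing the same function.  If all we
# care about is the input/output relation, they are interchangeable,
# even though their internal workings differ.

def factorial_recursive(n: int) -> int:
    # One implementation: recursion.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    # A different implementation: iteration with explicit state.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Extensionally equal on every tested input, despite different internals.
assert all(factorial_recursive(n) == factorial_iterative(n)
           for n in range(10))
```

The analogy being claimed is that a brain and a simulation of a brain
would stand in the same relation as these two functions.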

Unless you want to argue with Turing, we can construct the following
implication:
  If simulated consciousness is not equivalent to real consciousness,
  then consciousness is not a computational process.

Do you agree that this is a true statement?
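For the record, that implication is just the contrapositive of "if
consciousness is a computational process, then simulated consciousness
is equivalent to real consciousness."  A brute-force truth-table check
of the contrapositive rule (a sketch, with proposition names of my own
choosing):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# P: consciousness is a computational process
# Q: simulated consciousness is equivalent to real consciousness
# Check: (P -> Q) has the same truth value as (~Q -> ~P) in every case.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)
```

So accepting the original conditional commits you to the stated
implication, and vice versa.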

Perhaps instead, you've taken the consequent part as your axiom, and
derived your thoughts about simulated equivalence.

In any case, it appears we have someone taking A as an axiom talking
with someone taking ~A as an axiom.  Not easy to resolve.

>> Have you studied the molecular pathways that mediate these
>> changes?
>I've already assumed the programmer knows everything possible about
> the biology of experience.

That sounds like a "no" to me, which is ok, except that you're trying
to argue that something you haven't studied can't be adequately
simulated.
>> I think you have a very different meaning for the word
>> "semantics"
>In the broadest sense I mean the having of conscious experience of
> any kind, but we concern ourselves here mainly with the kinds of
> mental contents most closely associated with intelligence, e.g., the
> experience of understanding words and numbers.

The word "experience" has two meanings which could be operating here:

 1) we learn semantics by experience, and

 2) semantics is the experience (or quale) of understanding.

I think semantics are just facts of a particular type.  Things that
seem like that thing over there are called "balls".  That's a semantic
relationship, which helps to establish the meaning of the word "ball".
The semantic fact remains even if I'm not experiencing it at the
moment.

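If semantics are facts of a particular type, they can be written down
like any other facts.  A minimal sketch (the relation names and store
are illustrative assumptions of mine, not a standard formalism):

```python
# A tiny store of semantic facts as (subject, relation, object) triples.

semantic_facts = set()

def assert_fact(subject: str, relation: str, obj: str) -> None:
    semantic_facts.add((subject, relation, obj))

def holds(subject: str, relation: str, obj: str) -> bool:
    return (subject, relation, obj) in semantic_facts

# "Things that seem like that thing over there are called 'balls'."
assert_fact("that-round-thing", "is-called", "ball")

# The fact persists in the store whether or not anyone is currently
# experiencing it -- it's just data.
assert holds("that-round-thing", "is-called", "ball")
```

On this view, nothing about the fact itself requires a quale to be
present when the fact is consulted.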
I took a quick peek at the Wikipedia entry for Semantics, and there is
very little there to support your usage.  The closest I could find was
this:

     In Chomskian linguistics there was no mechanism for the learning
     of semantic relations, and the nativist view considered all
     semantic notions as inborn. Thus, even novel concepts were
     proposed to have been dormant in some sense. This view was also
     thought unable to address many issues such as metaphor or
     associative meanings, and semantic change, where meanings within
     a linguistic community change over time, and qualia or subjective
     experience.
Now, I wouldn't count Wikipedia as authoritative, but I'm going to
presume your usage is non-standard unless you can show otherwise.

This is probably the source of the "can syntax produce semantics"
question.
It sounds like what you really mean by that is:

  syntax cannot produce the quale of understanding

which is only a step or two away from the axiom I speculate on you
holding above.  Hence the speculation that you're holding this as an
axiom too.

Are you ready to take the blue pill and try out living in a world
where ~A is true?

Non-Euclidean geometry is fun!


More information about the extropy-chat mailing list