[ExI] Semiotics and Computability (was: The digital nature of brains)

Spencer Campbell lacertilian at gmail.com
Sun Jan 31 03:07:58 UTC 2010


Stathis Papaioannou <stathisp at gmail.com>:
>Gordon Swobe <gts_2000 at yahoo.com>:
>> A3: syntax is neither constitutive of nor sufficient for semantics.
>>
>> It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now?
>
> No, I assert the very opposite: that meaning is nothing but the
> association of one input with another input. You posit that there is a
> magical extra step, which is completely useless and undetectable by
> any means.

Crap! Now I'm doing it too. This whole discussion is just an absurdly
complex feedback loop, neither positive nor negative. It will never
get better and it will never end. Yet the subject matter is
interesting, and I am helpless to resist.

First, yes, I agree with Stathis's assertion that association of one
input with another input, or with another output, or, generally, of
one datum with another datum, is the very definition of meaning.
Literally, "A means B". This is mathematically equivalent to "A
equals B". Smoke equals fire; therefore, if smoke is true or fire is
true then both are true. This is very bad reasoning, and very human.
Nevertheless, we can say that there is a semantic association between
smoke and fire.
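
To make that concrete, here is a minimal sketch in Python. The
symbols, and the deliberately fallacious two-way inference, are my
own additions:

    # The entire "semantics" here is one association between symbols.
    associations = {"smoke": "fire"}

    # Reading "means" as "equals" licenses inference in both
    # directions -- bad reasoning, as noted, but recognizably human.
    def infer(symbol):
        for a, b in associations.items():
            if symbol in (a, b):
                return {a, b}  # if either is "true", take both as true
        return {symbol}

    print(infer("smoke"))  # -> {'smoke', 'fire'}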

Of course the definitions of semantics and syntax seem to have become
deranged somewhere along the line, so someone whose interpretation of
those terms differs from mine may well leap at the chance to rub my
face in it here. This is a risk I am willing to take.

So!

To see a computer's idea of semantics one might look at file formats.
An image can be represented in BMP or PNG format, but in either case
it is the same image; both files have the same meaning, though the
manner in which that meaning is represented differs radically, just as
10/12 differs from 5 * 6^-1.
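
One can check this with the Pillow library, assuming image.bmp and
image.png (hypothetical files) were both saved from the same picture:

    from PIL import Image

    bmp = Image.open("image.bmp").convert("RGB")
    png = Image.open("image.png").convert("RGB")

    # The bytes on disk differ radically, but the decoded pixels --
    # the "meaning" of each file -- come out identical.
    assert list(bmp.getdata()) == list(png.getdata())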

Another source might be desktop shortcuts. You double-click the icon
for the terrible browser of your choice, and your computer takes this
to mean instead that you are double-clicking an EXE file in a
completely different place. Note how naturally the word "mean" slots
into that sentence, implying a semantic association.
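
On a POSIX system the analogous association, a symbolic link, is a
single line; the paths here are purely hypothetical:

    import os

    # A shortcut is just one name bound to another name.
    os.symlink("/opt/browser/browser.exe", "browser-shortcut")
    print(os.readlink("browser-shortcut"))
    # -> /opt/browser/browser.exe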

Neither of these is nearly so human a use of semantics, because the
relationship in each case is literal, not causal. However, it is still
semantics: an association between two pieces of information.

Gordon has no beef with a machine that produces intelligent behavior
through semantic processes, only with one that produces the same
behavior through syntax alone.

At this point, though, his argument becomes rather hazy to me. How can
anything even resembling human intelligence be produced without
semantic association?

A common feature in Searle's thought experiments, and in Gordon's by
extension, is that there is a very poor description of the exact
process by which a conversational computer determines how to respond
to any given statement. This is necessary to some extent, because if
anyone could give a precise description of the program that passes the
Turing test, well, they could just write it.

In any case, there's just no excuse to describe that program with
rules like: if I hear "What is a pig?" then I will say "A farm
animal". Sure, some people give that response to that question some of
the time. But ask the same person the same question twice in a row
and you will get dramatically different answers. It's a gross
oversimplification, but I'm forced to admit that it is technically
valid if one views it only as what will happen, from a very high-level
perspective, if "What is a pig?" is the very next thing the Chinese
Room is asked. A whole new lineup of rules like that would have to
be generated after each response. Not a very practical solution.
Effective, but not efficient.
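
As a sketch of what such a lineup amounts to (the questions and
answers are my own inventions), note that each rule must be keyed on
the entire conversation so far, which is exactly why the table
explodes:

    # A lookup-table Room: the key is the whole conversation so far,
    # because "What is a pig?" asked twice should not get the same
    # answer twice.
    rules = {
        ("What is a pig?",): "A farm animal.",
        ("What is a pig?", "What is a pig?"):
            "I just told you: a farm animal.",
    }

    history = []

    def respond(question):
        history.append(question)
        # In the real Room a whole new lineup of rules would have to
        # be generated after every exchange; here we just look one up.
        return rules.get(tuple(history), "...")

    print(respond("What is a pig?"))  # -> A farm animal.
    print(respond("What is a pig?"))  # -> I just told you: ...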

However, it seems to me that even if we had the brute processing power
to implement a system like that while keeping it realistically
quick-witted, it would still be impossible to generate that rule
without the program containing at least one semantic fact, namely,
"pig = farm animal".

The only part syntactical rules play in this scenario is to insert the
word "a" at the beginning of the sentence. Syntax is concerned only
with grammatical correctness. Using syntax alone, one might imagine
that the answer would be "a noun": the place at which "pig" occurs in
the sentence implies that the word must be a noun, and this is as
close as a syntactical rule can come to showing similarity between two
symbols. If the grammar in question doesn't explicitly mark its
symbols with categories, and English grammar doesn't, then not even
this can be done,
and a meaningful syntax-based response is completely impossible.
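
A syntax-only responder, sketched the same way, can report a category
and nothing more:

    # Pure syntax: the responder knows which slot a word fills, but
    # holds no associations between symbols.
    def syntax_only(question):
        words = question.rstrip("?").split()
        # In "What is a X?" the slot after "a" must hold a noun; that
        # category is all a syntactical rule can say about the symbol.
        if words[:3] == ["What", "is", "a"]:
            return "A noun."
        return None

    print(syntax_only("What is a pig?"))  # -> A noun.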

I started on this message to point out that Stathis had completely
missed the point of A3, but sure enough I ended up picking on Searle
(and Gordon) as well.

In the end, I would like to make the claim: syntax implies semantics,
and semantics implies syntax. One cannot find either in isolation,
except in the realm of one's imagination. Like so many other divisions
imposed between natural (that is, non-imaginary) phenomena, this one
is valid but false.


