[ExI] The digital nature of brains (was: digital simulations)

Gordon Swobe gts_2000 at yahoo.com
Wed Jan 27 13:16:41 UTC 2010


--- On Tue, 1/26/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

This exchange looks to me like a breakthrough, Stathis.

I wrote:

>> To the argument that "association equals symbol
>> grounding," as it has been bandied about...
>>
>> Modern word processors can reference words in digital
>> dictionaries. Let us say that I write a program that does
>> only that, and that it does this automagically at ultra-fast
>> speed on a powerful Cray software/hardware system with
>> massive or even infinite memory. When the human operator
>> types in a word, the s/h system first assigns that word to a
>> variable, call it W, and then searches for the word in a
>> complete dictionary of the English language. It assigns the
>> dictionary definition of W to another variable, call it D,
>> and then makes the association W = D.
>>
>> The system then treats every word in D as it did for
>> the original W, looking up the definition of every word
>> in the definition of W. It then does the same for those
>> definitions, and so on and so on through an indefinite
>> number of branches until it nearly or completely exhausts
>> the complete English dictionary.
>>
>> When the program finishes, the system will have made
>> every possible meaningful association of W to other words.
>> Will it then have conscious understanding of the meaning of W?
>> No. The human operator will understand W, but s/h systems
>> have no means of attaching meanings to symbols. The system
>> followed purely syntactic rules to make all those hundreds
>> of millions of associations without ever understanding them.
>> It cannot get semantics from syntax.
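
For concreteness, the procedure I describe above amounts to a breadth-first closure over dictionary definitions. A minimal sketch in Python follows; the toy dictionary and the function names are only my illustration, not the actual program:

from collections import deque

def tokenize(definition):
    """Split a definition into lowercase words, stripping simple punctuation."""
    return [w.strip(".,;:") for w in definition.lower().split()]

def associate(w, dictionary):
    """Breadth-first closure of dictionary lookups starting from word w.

    Returns a mapping from each reachable word W to its definition D,
    i.e. every purely syntactic association W = D the program can make.
    """
    associations = {}
    queue = deque([w.lower()])
    while queue:
        word = queue.popleft()
        if word in associations or word not in dictionary:
            continue  # already expanded, or not an entry in the dictionary
        definition = dictionary[word]
        associations[word] = definition     # the association W = D
        queue.extend(tokenize(definition))  # recurse into every word of D
    return associations

# Toy example with a two-entry dictionary:
toy = {"bank": "land alongside a river", "river": "a large stream of water"}
print(associate("bank", toy))

Note that nothing in this procedure consults anything but the shapes of the symbols: it halts with every possible association made, having attached no meaning to any of them.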

You replied:

> But if you put a human in place of the computer doing the
> same thing, he won't understand the symbols either, no matter how
> intelligent he is. 

Absolutely right!! 

My example above works like the Chinese room, but with the languages reversed. We can imagine a Chinese man operating the word-association program inside the Cray computer. He will manipulate the English symbols according to the syntactic rules specified by the program, and he will do so in ways that appear meaningful to an English-speaking human operator, but he will never come to understand the English symbols. He cannot get semantics from syntax. 

This represents progress, because just a few days ago you argued that perhaps people, and also computers, really do get semantics from syntax. Now I think you agree that they do not.

It looks like you agree with the third premise:

'A3: Syntax is neither constitutive of nor sufficient for semantics.'

We might also add this corollary:

'A3a: The mere syntactic association of symbols is not sufficient for semantics.'

These truths are pretty easy to see, and now you see them.

> The symbols need to be associated with some environmental input,
> and then they have "meaning". 

Environmental input matters, no question about that. I'll address it in my next message. For now I would like you to tell me if you agree with the above. 

-gts