[ExI] The digital nature of brains (was: digital simulations)

Stathis Papaioannou stathisp at gmail.com
Fri Jan 29 10:41:07 UTC 2010


On 29 January 2010 06:05, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Wed, 1/27/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> When the program finishes, the system will
>>> have made every possible meaningful association of W to
>>> other words. Will it then have conscious understanding of the
>>> meaning of W? No. The human operator will understand W but
>>> s/h (software/hardware) systems have no means of attaching meanings to
>>> symbols. The system followed purely syntactic rules to make all
>>> those hundreds of millions of associations without ever
>>> understanding them. It cannot get semantics from syntax.
>
> I name my dictionary-word-association-program s/h system above "DWAP".
>
>> I'm afraid I don't agree. The man in the room doesn't
>> understand the symbols, the matter in the computer doesn't understand
>> the symbols, but the process of computing *does* understand the
>> symbols.
>
> You lost me there. Either DWAP has conscious understanding of W (in which case it 'has semantics'), or else DWAP does not have conscious understanding of W.

It depends on whether DWAP is actually capable of natural language.
It's easy to write a dictionary, but it isn't easy to write a program
that passes the TT (the Turing test), which is why it hasn't been
done. The brain does a lot of things subconsciously, arguably most
things. You are not aware of the processing going on in your brain
when you are having a conversation: you are conscious only of
"words", "sentences" and "ideas", which are the high-level result of
very complex low-level switching-type behaviour by neurons. Your
error is to look at the low-level behaviour of a computer and say
that you don't see any meaning there, while ignoring the fact that
the same is true of the low-level behaviour of the brain. So: if
DWAP is capable of passing the TT, then DWAP probably has conscious
understanding, even if the components of DWAP manipulating the
symbols understand no more than a neuron does.
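
Here, purely for illustration, is a minimal sketch of the kind of
purely syntactic association step Gordon describes. Every name and
data item below is invented for the example; the point is only that
the code relates symbols to other symbols by rule, with nothing in it
that could be called understanding:

    # Hypothetical toy version of DWAP's association step: it links
    # symbols to other symbols by following formal rules, nothing more.
    DEFINITIONS = {
        "water": ["clear", "liquid", "drink"],
        "liquid": ["matter", "flows", "water"],
        "drink": ["swallow", "liquid"],
    }

    def associate(word, depth=2):
        """Collect every word reachable from `word` by following
        definitions -- a purely formal (syntactic) operation."""
        seen = {word}
        frontier = [word]
        for _ in range(depth):
            frontier = [w for f in frontier
                        for w in DEFINITIONS.get(f, [])
                        if w not in seen]
            seen.update(frontier)
        return seen

    print(associate("water"))  # symbols linked to symbols, nothing more

The sketch makes Gordon's premise vivid: each step is a formal
manipulation. The disagreement is over whether enough such steps,
organised the right way, can amount to understanding.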

> First you agreed with me that DWAP does not have semantics, and you also made the excellent observation that a human who performed the same syntactic operations on English symbols would also not obtain conscious understanding of the symbols merely by virtue of having performed those operations. It would take something else, you said.
>
> But now it seems that you've reneged. Now you want to say that DWAP has semantics? I think you had it right the first time.
>
> So let me ask you again in clear terms:
>
> Does DWAP have conscious understanding of W? Or not?
>
> And would a human non-English-speaker obtain conscious understanding of W from performing the same syntactic operations as did DWAP? Or not?

A human non-English-speaker would be unable to perform the operations
of a DWAP capable of holding a conversation, but if he could, he would
have no more understanding of what he was doing than neurons have of
what they are doing. However, he would be implementing an algorithm
that has understanding, just as the dumb neurons (certainly much
dumber than even a very dumb person) are implementing an algorithm
that has understanding. Do you acknowledge this basic point about a
system: that understanding emerges from the interaction of its
components, even though the components, individually or even
collectively, lack it?
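
A deliberately crude toy illustrates the emergence point (the weights
below are hand-picked and purely hypothetical): each unit does nothing
but compare a weighted sum to a threshold, yet together the three
units compute XOR, a function none of them computes alone.

    # Three "dumb" threshold units that collectively compute XOR.
    def unit(inputs, weights, threshold):
        """Fires (1) iff the weighted sum reaches the threshold.
        The unit itself 'understands' nothing."""
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s >= threshold else 0

    def xor(a, b):
        h1 = unit([a, b], [1, 1], 1)       # fires if a OR b
        h2 = unit([a, b], [1, 1], 2)       # fires if a AND b
        return unit([h1, h2], [1, -1], 1)  # fires if OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor(a, b))  # prints 0, 1, 1, 0

XOR is not understanding, of course, but the structural point is the
same: the capacity belongs to the organised system, not to any
component examined on its own.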


-- 
Stathis Papaioannou


