[ExI] meaning & symbols

Gordon Swobe gts_2000 at yahoo.com
Wed Feb 3 02:32:09 UTC 2010

--- On Tue, 2/2/10, Spencer Campbell <lacertilian at gmail.com> wrote:

> In the second paragraph I almost jumped on you again for
> misusing the concept of abstraction, but then I noticed you said "and
> about other" rather than "and other". You weren't saying that sense data
> are abstractions, if I understand correctly. 


> When we get to the third paragraph, however, it sounds as
> if you believe that mankind discovered symbols rather than
> invented them.

No. Not sure why you would say that. I certainly do not believe we discover symbols. We create them.

> Here's the thing: the very idea of a symbol is, in and of
> itself, an abstraction. I suspect it's possible to form a 
> coherent model of the mind (by today's standards) without ever 
> mentioning symbols or anything like them. It may not be a particularly 
> elegant model, but it would work as well as any other.

No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics," and any coherent model of it must explain that plain fact.

The computationalist model fails to explain that plain fact. On that model, minds do no more than run programs, and programs do no more than manipulate symbols according to rules of syntax. Nothing in the model explains how the mind can consciously understand the symbols it manipulates. To make the model coherent, its proponents must introduce a homunculus: an observer/user of the supposed brain/computer who sees and understands the meanings of the symbols. But the homunculus fallacy proves fatal to the theory: how does the homunculus understand the symbols, if not by some means other than computation? And if that's so, then why did we say the mind exists as a computer in the first place?

> You're correct in saying that sense symbols do not exist,
> but only insofar as there aren't any symbols which DO exist.

Hmm, I count 21 word-symbols in that sentence of yours. 

> All I meant by "intrinsic" meaning was that some symbols in
> the field of all available within a given Z are meaningful
> irrespective of any other symbols. Eric explains that this is so 
> because they are invoked directly by incoming sensory data: I see 
> a dog, I think a dog symbol. 

Yes, but when I look inside your head I see nothing even remotely resembling a digital computer. Instead I see a marvelous product of biological evolution.



More information about the extropy-chat mailing list