[ExI] Meaningless Symbols

Samantha Atkins sjatkins at mac.com
Mon Jan 11 09:23:12 UTC 2010


On Jan 9, 2010, at 10:54 AM, Gordon Swobe wrote:

> --- On Sat, 1/9/10, Ben Zaiboc <bbenzai at yahoo.com> wrote:
> 
>> In 'Are We Spiritual Machines?: Ray
>> Kurzweil vs. the Critics of Strong AI', John Searle says:
>> 
>> "Here is what happened inside Deep Blue. The computer has a
>> bunch of meaningless symbols that the programmers use to
>> represent the positions of the pieces on the board. It has a
>> bunch of equally meaningless symbols that the programmers
>> use to represent options for possible moves."
>> 
>> 
>> This is a perfect example of why I can't take the guy
>> seriously.  He talks about 'meaningless' symbols, then
>> goes on to describe what those symbols mean! He is
>> *explicitly* stating that two sets of symbols represent
>> positions on a chess board, and options for possible moves,
>> respectively, while at the same time claiming that these
>> symbols are meaningless.  wtf?  
> 
> Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. 

We manipulate symbols ourselves that have no meaning except the one we assign.  Worse, what we assign to most of our symbols is actually very murky, approximate, and sloppy.  Worse still, the largest part of our mental processing is sub-symbolic: the utterly unconscious output of a very lossy, buggy biological computer programmed, in large part, just well enough to survive its environment and reproduce.
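
To make that concrete, here is a minimal, hypothetical Python sketch (the names and values are invented for illustration; this is nothing from Deep Blue or any real chess engine).  The same integers read as chess pieces under one mapping and as something unrelated under another; the "meaning" lives entirely in the mapping we bring to them, not in the symbols themselves:

    # Hypothetical illustration: bare integers with no intrinsic meaning.
    raw_symbols = [4, 2, 3, 5, 6, 3, 2, 4]

    # Two different "decoder" conventions for the very same symbols.
    as_chess_pieces = {2: 'knight', 3: 'bishop', 4: 'rook',
                       5: 'queen', 6: 'king'}
    as_priorities   = {2: 'low', 3: 'medium', 4: 'high',
                       5: 'urgent', 6: 'critical'}

    # The interpretation, not the data, supplies the meaning.
    print([as_chess_pieces[s] for s in raw_symbols])
    print([as_priorities[s] for s in raw_symbols])

Nothing inside the machine distinguishes the chess reading from the priority reading; the choice is made by the humans who wrote and read the mappings.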

> 
> It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding.
> 

Human programmers are the primary reason machines are not much smarter.  The conscious, explicit-reasoning part of our brains is what we use for programming, and it is notoriously weak and limited: a small, recently added, experimental extension slapped on top of the original architecture.  We can't explicitly program much beyond the rather simplistic level we can debug.  It is amazing our machines are as smart as they are under such constraints.


> The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?"
> 

How can you prove that you understand the meanings?


> And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax."

But that is just semantics!  Sorry, couldn't resist.  :)

- samantha



