[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Dec 24 20:31:39 UTC 2009


--- On Thu, 12/24/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> It's as if you believe that some physical activity is not
> "purely syntactic", and therefore can potentially give rise to
> mind; but as soon as it is organised in a complex enough way that it can
> be interpreted as implementing a program, this potential is
> destroyed!

Real or hypothetical examples help to illustrate concepts, so let's try to use them when possible. I offer one:

Consider an actual program that takes a simple input asking what day of the week it is and reports "Thursday". You and I of course understand the meaning of "Thursday". We agree (for the moment) that the program did not understand the meaning, because it performed only syntactic operations and syntax does not give semantics.

Now you ask: what about the hardware? You want to know whether the hardware (RAM, CPU and so on) that implemented those syntactic operations at the very lowest level (in 1s and 0s, or ons and offs) knew the meaning of "Thursday" even while the higher program level did not. Odd question to ask, I think. Unlike the higher program level (which at least appears to have understanding!), the machine level cannot even recognize or spell "Thursday". How then could it understand the meaning of the word?
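To make the example concrete, such a program might look something like this minimal sketch (Python and the particular names here are just my choice for illustration; the original example specifies no language or code):

    import datetime

    def report_day(query):
        # Purely syntactic: compare the input string to a fixed string,
        # then map the system clock reading to another string.
        # No step requires knowing what a "day" or "Thursday" is.
        if query.strip().lower() == "what day is it?":
            return datetime.date.today().strftime("%A")
        return "unrecognized input"

    print(report_day("What day is it?"))  # e.g. "Thursday"

Every line maps symbols to symbols according to formal rules, at the program level and all the way down to the hardware that runs it.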

Now you might object and point out that you never actually agreed that the higher program level lacked understanding of "Thursday". Understandable - after all, if any understanding exists then we should expect to find it at the higher levels - but now we find ourselves asking the same question about whether and how programs can get semantics from syntax.

-gts