[ExI] How to ground a symbol

Ben Zaiboc bbenzai at yahoo.com
Sun Jan 31 11:44:14 UTC 2010


Gordon wrote:

"We cannot first understand the meaning of a symbol from looking only at its form"

Obviously not.

And:

"We must learn the meaning in some other way, and attach that meaning to the form, such that we can subsequently recognize that form and know the meaning"


Indeed.
There's only one way I can think of to do this, and that's through association with sensory data (or more accurately, association with a set of abstracted commonalities in a set of mind-states produced in conjunction with the reception of sensory data, but that's a bit of a mouthful).

The word "Red" written in a dictionary (or as a piece of data in a computer memory, or a pattern of neuron firings in some part of a brain) is meaningless on its own.  Of course.

A system that associates the word "Red" with the various states produced within itself whenever its sensory apparatus receives light in a particular range of wavelengths, or when it recreates some of those states from previously-stored data (remembering), thereby assigns a meaning to the word.  "Red" becomes a shorthand for an abstracted set of common elements in these states.  This is the training phase, when the system extracts commonalities from a large set of examples.  Artificial neural nets and learning algorithms need to go through this phase, and so do babies.
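To make that concrete, here is a deliberately crude Python sketch of the training phase.  Everything in it is illustrative: the wavelength numbers and the mean/spread "prototype" just stand in for whatever much richer internal states a real system would abstract its commonalities from.

# Toy sketch: ground the symbol "Red" by extracting a commonality
# (here just the mean and spread of a wavelength feature) from many
# labelled sensory samples.  Purely illustrative, not a real model.

from statistics import mean, stdev

def ground_symbol(symbol, samples):
    """Bind a symbol to the commonalities extracted from many examples."""
    return {"symbol": symbol,
            "prototype": mean(samples),   # the abstracted common element
            "spread": stdev(samples)}     # variation seen during training

# Many exposures to red things, as wavelengths in nanometres
red_samples = [630, 650, 700, 620, 680, 645, 660]
red_grounding = ground_symbol("Red", red_samples)
print(red_grounding)   # {'symbol': 'Red', 'prototype': 655.0, 'spread': ~27.8}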

As far as I can see, this is the only meaning that "symbol grounding" can possibly have, and any system of sufficient complexity, with sensory inputs, memory storage, pattern-matching methods and training data, can do it.

It makes no difference whether that system is biological, electromechanical, digital, analogue, stones in grooves in a vast desert, or charged particles in a system of magnetic fields.  It's the processing of sensory information that matters.

In future, whenever the system sees a rose, it will know whether it's a red rose or not, because there'll be a part of its internal state that matches the symbol "Red".  If it's running the correct kind of pattern-matching algorithms, it will recognise this instantly, and know that the rose is "Red".
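Again, a crude sketch of that recognition step, assuming a grounding like the one above.  Nearest-prototype matching with a made-up tolerance stands in for whatever pattern-matching the real system actually runs:

# Toy sketch of recognition: the new sensory state matches the symbol if
# it falls within the range abstracted during training.  The stored values
# and the tolerance are illustrative, carried over from the sketch above.

red_grounding = {"symbol": "Red", "prototype": 655.0, "spread": 27.8}

def matches(grounding, observation, tolerance=2.0):
    """Does this observation fall within the abstracted commonality?"""
    return abs(observation - grounding["prototype"]) <= tolerance * grounding["spread"]

rose_wavelength = 640   # what the sensors report when looking at the rose
if matches(red_grounding, rose_wavelength):
    print("a red rose")   # internal state matched the stored symbol "Red"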

This also explains why we can use the same word for slightly different things.  One system might be exposed to lots of cyan things during its development, and taught to use the word "Blue"; another might be exposed to lots of spectrum-blue things, and associate the same word.  They will both use the same word for the same general end of the spectrum, but may later argue over whether that girl's dress is actually "Blue" or "Green".  This happens with humans all the time, and I fully expect it to happen with AIs.
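One last toy sketch of that disagreement, under the same illustrative assumptions (hue values in degrees and a made-up category boundary, nothing more):

# Two systems ground the same word "Blue" on different training sets, so
# they draw the category boundary in different places.  They agree on a
# clear blue but disagree about a blue-green borderline case.

from statistics import mean

def ground(symbol, samples):
    return {"symbol": symbol, "prototype": mean(samples)}

def calls_it(grounding, hue, boundary=60):
    """Apply the symbol if the hue is within `boundary` degrees of the prototype."""
    return abs(hue - grounding["prototype"]) <= boundary

blue_for_a = ground("Blue", [180, 190, 200, 195, 185])   # raised among cyan-ish things
blue_for_b = ground("Blue", [230, 240, 250, 245, 235])   # raised among spectrum blue

print(calls_it(blue_for_a, 215), calls_it(blue_for_b, 215))   # True True  (clear blue)
print(calls_it(blue_for_a, 150), calls_it(blue_for_b, 150))   # True False (the dress)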

Ben Zaiboc