[ExI] Symbol Grounding [WAS Re: The digital nature of brains]
Richard Loosemore
rpwl at lightlink.com
Fri Feb 5 15:03:25 UTC 2010
Gordon Swobe wrote:
> --- On Fri, 2/5/10, Dave Sill <sparge at gmail.com> wrote:
>
>>> True or false, Stathis:
>>>
>>> When a program running on a digital computer associates a
>>> sense-datum (say, an image of an object taken with its web-cam)
>>> with the appropriate word-symbol, the system running that program
>>> has now by virtue of that association grounded the word-symbol
>>> and now has understanding of the meaning of that word-symbol.
>> That depends entirely upon the nature of the program.
>
> I see. So then let us say programmer A writes a program that fails
> but that programmer B writes one that succeeds.
>
> What programming tricks did B use such that his program instantiated
> an entity capable of having subjective understanding of words? (And
> where can I find him? I want to hire him.)
[I doubt that you could afford me, but I am open to pleasant surprises.]
As to the earlier question, you are asking about the fundamental nature
of "grounding". Since there is a huge amount of debate and confusion on
the topic, I will save you the trouble of searching the mountain of
prior art and come straight to the answer.
If a system builds a set of symbols that purport to be "about" things in
the world, then the only way to decide if those symbols are properly
grounded is to look at
(a) the mechanisms that build those symbols,
(b) the mechanisms that use those symbols (to do, e.g., thinking),
(c) the mechanisms that adapt or update the symbols over time,
(d) the interconnectedness of the symbols.
If these four aspects of the symbol system are all coherently engaged
with one another, so that the building mechanisms generate symbols that
the usage mechanisms then deploy in a consistent way, the adaptation
mechanisms modify the symbols without breaking that consistency, and
the pattern of interconnections makes sense, then the symbols are
grounded.
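To make those four requirements concrete, here is a minimal sketch in
Python. Every name in it (Symbol, SymbolSystem, build, use, adapt) is
invented purely for illustration; this is not Harnad's formulation and
not a real implementation, just the shape of the thing:

    # Illustrative only: a symbol system seen as four cooperating
    # mechanisms.  All names are invented for this sketch.

    class Symbol:
        def __init__(self, name):
            self.name = name
            self.links = set()              # (d) interconnectedness

    class SymbolSystem:
        def __init__(self):
            self.symbols = {}

        def build(self, percepts):
            # (a) build symbols from interaction with the environment
            for p in percepts:
                self.symbols.setdefault(p, Symbol(p))

        def use(self, cue):
            # (b) deploy symbols in "thinking": here, follow the links
            s = self.symbols.get(cue)
            return sorted(o.name for o in s.links) if s else []

        def adapt(self, a, b):
            # (c) update symbols over time: here, link co-occurring ones
            if a in self.symbols and b in self.symbols:
                self.symbols[a].links.add(self.symbols[b])
                self.symbols[b].links.add(self.symbols[a])

The code itself is trivial; the point is that (a) through (d) all
operate on the same structures, so grounding is a claim about how the
whole ensemble hangs together, not about any one mechanism taken in
isolation.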
The key to understanding this last paragraph is Harnad's contention
that, as a purely practical matter, this kind of global coherence can
only be achieved if ALL the mechanisms are working together from the
get-go, which means that the building mechanisms, in particular, are
primarily responsible for creating the symbols (using real-world
interaction). So the normal way for symbols to become grounded is for
there to be meaningful "pickup" mechanisms that extract the symbols
autonomously, as a result of the system interacting with its environment.
But notice that pickup of the trivial kind you implied above (the system
just has an object detector attached to its webcam, and a simple bit of
code that forms an association with a word) is not by itself enough to
satisfy the requirements of grounding. Direct pickup from the senses is
a NECESSARY condition for grounding, but it is not a SUFFICIENT one.
Why not? Because if this hypothetical system is going to be
intelligent, then you need a good deal more than just the webcam and a
simple association function - and all that other machinery that is
lurking in the background has to be coherently connected to the rest.
Only if the whole lot is built and allowed to develop in a coherent,
autonomous manner can the system be said to be grounded.
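For contrast, the trivial front end you described amounts to roughly
the following sketch (Python again, with placeholder names; the
detect_object function stands in for a real vision system):

    # The trivial kind of "pickup": an object detector plus a lookup
    # table that attaches a word to whatever the detector reports.
    # Placeholder code only.

    WORD_FOR = {"mug": "cup", "small_feline": "cat"}

    def detect_object(frame):
        # stand-in for a real detector; pretend the frame is its label
        return frame

    def label_percept(frame):
        return WORD_FOR.get(detect_object(frame), "unknown")

That does perform pickup from the senses, in a rudimentary way, but
there is nothing behind it answering to (b), (c) or (d), which is
exactly why it falls short of grounding.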
So, because you only mentioned a couple of mechanisms at the front end
(a webcam and an association function), you did not give enough
information to tell whether the symbols are grounded or not. The correct
answer was, then, "it depends on the program".
The point of symbol grounding is that if the symbols are connected up by
hand, the subtle relationships and mechanism-interactions are almost
certainly not going to be there. But be careful about what is claimed
here: in principle someone *could* be clever enough to hand-wire an
entire intelligent system to get global coherence, and in that case it
could actually be grounded, without the symbols being picked up by the
system itself. But that is such a difficult task that it is for all
practical purposes impossible. It is much easier to give the system a set of
mechanisms that include the pickup (symbol-building) mechanisms and let
the system itself find the symbols that matter.
It is worth noting that although Harnad did not say it this way, the
problem is really an example of the complex systems problem (cf. my 2007
paper on the subject). Complex-system issues are what make it
practically impossible to hand-wire a grounded system.
You make one final comment, which is about building a system that has a
"subjective" understanding of words.
That goes beyond grounding, to philosophy-of-mind questions about
subjectivity. A properly grounded system will talk about having
subjective comprehension or awareness of meanings, not because it is
grounded per se, but because it has "analysis" mechanisms that
adjudicate on questions of subjectivity, and those mechanisms have
systemic properties that give rise to subjectivity. For more details
about that, see my 2009 paper on Consciousness, which was given at the
AGI conference last year.
Richard Loosemore