[ExI] The symbol grounding problem in strong AI

Ben Zaiboc bbenzai at yahoo.com
Fri Dec 18 11:09:28 UTC 2009


Gordon Swobe <gts_2000 at yahoo.com> wrote:

> --- On Thu, 12/17/09, Ben Zaiboc <bbenzai at yahoo.com>
> wrote:
...
 
> > It's "Yes", "Yes", and "What symbol grounding
> problem?"
> 
> You'll understand the symbol grounding problem if and when
> you understand my last sentence, that I did nothing more
> interesting than does a cartoonist.

LOL.  I didn't mean that I don't understand what the 'symbol grounding problem' is; I meant that there is no such problem.  This seems to be a pretty fundamental sticking point, so I'll explain my thinking.

We do not know what 'reality' is.  There is nothing in our brains that can directly comprehend reality (if that even means anything).  What we do is collect sensory data via our eyes, ears, etc., and sift it, sort it, combine it, and distort it with preconceptions and past memories to create 'sensory maps', which then feed the more abstract parts of our minds and build up 'the World according to You'.

We use this constantly changing internal 'world representation' to build models of our environment, other people, imaginary things, and so on, and most of the time it works well enough that we habitually think of it as 'reality'.  *But it's not*.  The 'real reality' is forever unknowable.

OK, so given that, what does 'symbol grounding' mean?  It means that the meaning of a mental symbol is built up from internal representations that derive from this 'World according to You'.  There's nothing mysterious or difficult about it, and it doesn't really even deserve the description 'problem'.  There is no problem.  There is just another set of relationships in the mind between memories, sensory data, and the models and abstractions we build from them.

The 'chairness' of a chair has absolutely nothing to do with some platonic realm that we need mystical access to.  It's something we create in our own minds from a lot of complex, not currently understood, but inherently understandable and mechanistic processes in our brains.  The symbol of 'sitting' is grounded in memories and sensory data from thousands of experiences of putting our bodies in a particular set of positions and experiencing a variety of sensations that result.  That's all there is to it.  Nothing difficult at all, even though it is very complex.  It's certainly not *mysterious*.
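
To make that concrete, here's a toy sketch in Python.  The class names and fields are invented purely for illustration (they're nobody's actual theory, and obviously nothing like a brain); the only point is that 'grounding' a symbol can be nothing more exotic than linking it to stored sensory and motor records, with its 'meaning' being whatever regularities those records share.

from dataclasses import dataclass, field

@dataclass
class Episode:
    """One remembered experience: crude sensory and motor data."""
    proprioception: dict   # e.g. {'hips': 'flexed', 'knees': 'bent'}
    touch: dict            # e.g. {'support': 'under thighs'}
    outcome: str           # e.g. 'weight off feet, restful'

@dataclass
class Symbol:
    name: str
    episodes: list = field(default_factory=list)

    def ground(self, episode):
        # 'Grounding' is just accumulating associations with experience.
        self.episodes.append(episode)

    def meaning(self):
        # The 'meaning' is whatever regularity the stored episodes share;
        # here it's faked by counting recurring outcomes.
        outcomes = [e.outcome for e in self.episodes]
        return max(set(outcomes), key=outcomes.count) if outcomes else None

sitting = Symbol('sitting')
for _ in range(1000):   # thousands of experiences of one bodily posture
    sitting.ground(Episode({'hips': 'flexed', 'knees': 'bent'},
                           {'support': 'under thighs'},
                           'weight off feet, restful'))
print(sitting.meaning())   # -> 'weight off feet, restful'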

>  
> > If your character had a brain, and it was a complete
> > simulation of a biological brain, then how could it not have
> > understanding?
> 
> Because it's just a program and programs don't have
> semantics.

You keep saying this, but it's not true.

Complex enough programs *can* have semantics.  This should be evident from my description of internal world-building above.  The brain isn't doing anything that a (big) set of interacting data-processing modules in a program (or, more likely, a large set of interacting programs) can't also do.  Semantics isn't something that can exist outside of a mind.  Meaning is an internally generated thing.


> 
> > This: "A simulation is, umm, a simulation." is the
> > giveaway, I think.? 
> 
> The point is that computer simulations are just that:
> simulations. 

There seems to be an implication that a simulation is somehow 'inferior' to the 'real thing'. 

I remember simulating my father's method of tying shoelaces when I was small.  I'm sure that my shoelace-tying now is just as good as his ever was.

I've heard the idea that a computer model of a thunderstorm will never be wet.  But that's not actually true. It's a confusion between levels.
A computer simulation of a thunderstorm, if accurate enough, will have the same sensory effects on a person who is simulated using the same methods.  In other words, it's wet on its own level.  Anything else would be absurd.
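
Here's the same point as a few lines of toy Python (the classes and names are invented for the illustration): the simulated storm and the simulated observer share a level, so the question "is it wet?" gets asked and answered inside the simulation, not at the level of the hardware.

class SimWorld:
    def __init__(self):
        self.raining = False
        self.people = []

    def thunderstorm(self):
        self.raining = True
        for person in self.people:
            person.sense(self)          # simulated senses, same level as the rain

class SimPerson:
    def __init__(self):
        self.feels_wet = False

    def sense(self, world):
        self.feels_wet = world.raining  # wetness registered within the simulation

world = SimWorld()
alice = SimPerson()                      # a person simulated by the same methods
world.people.append(alice)
world.thunderstorm()
print(alice.feels_wet)   # -> True: wet on its own level
# Nothing here makes the host computer wet, and nothing about real rain
# answers questions posed inside the simulation.  Different levels.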

> 
> > Correct me if I'm wrong, but it seems that you think there
> > is some magical functional property of a physical object
> > that a model of it, *no matter how detailed*, cannot
> > possess?
> 
> I don't claim that physical objects do anything magical. I
> do however claim that computer simulations of physical
> objects do not. 
> 

Of course not.  They don't need to, because the physical objects themselves don't do anything magical either.

A simulation of a physical process that replicates every functional property of that process will necessarily reproduce every behaviour of the original.

Whether or not we can accurately create such simulations is another matter, but that's just a problem of getting better at it, not a fundamental theoretical roadblock.

Ben Zaiboc