[ExI] The symbol grounding problem in strong AI

Ben Zaiboc bbenzai at yahoo.com
Sat Dec 19 10:56:11 UTC 2009


Gordon Swobe <gts_2000 at yahoo.com> wrote:

> --- On Fri, 12/18/09, Ben Zaiboc <bbenzai at yahoo.com>
> wrote:
... 
> > OK, so given that, what does 'symbol grounding' mean?
> > It means that the meaning of a mental symbol is built up
> > from internal representations that derive from this 'World
> > according to You'. There's nothing mysterious or
> > difficult about it, and it doesn't really even deserve the
> > description 'problem'.
> 
> It's a problem for simulations of people, Ben. Not a
> problem for real people.

I'm talking about simulations.  I'm talking about why, if the simulation is good enough, there is no functional difference.  You start by accepting what I'm saying, which is based entirely on information processing, then deny that it can apply to a non-biological information-processing system, even though you agree it applies to a biological one.

You haven't commented on the other part of my post, where I say:

------------------------
> > If your character had a brain, and it was a complete
> > simulation of a biological brain, then how could it
> > not have understanding?
>
> Because it's just a program and programs don't have
> semantics.

You keep saying this, but it's not true.

Complex enough programs *can* have semantics.  This should be evident from my description of internal world-building above.  The brain isn't doing anything that a (big) set of interacting data-processing modules in a program (or, more likely, a large set of interacting programs) can't also do.  Semantics isn't something that can exist outside of a mind.  Meaning is an internally-generated thing.
-------------------------

This is an important point, which is why I'm going back to it.

Do you agree or disagree that Meaning (semantics) is an internally-generated phenomenon in a sufficiently complex, and suitably organised, information-processing system with sensory inputs, motor outputs and memory storage?
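To make the question concrete, here's a toy sketch in Python (entirely my own illustration; the TinyAgent name and the episode format are invented for the example, and it's nothing like a real brain in scale or subtlety):

from collections import defaultdict

class TinyAgent:
    def __init__(self):
        # memory storage: symbol -> list of sensory episodes
        self.memory = defaultdict(list)

    def sense(self, symbol, sensory_input):
        # bind the symbol to what was experienced when it occurred
        self.memory[symbol].append(sensory_input)

    def meaning_of(self, symbol):
        # the 'meaning' is nothing over and above the internal
        # record of experiences associated with the symbol
        return self.memory.get(symbol, [])

agent = TinyAgent()
agent.sense("hot", {"temperature": 45, "pain": True})
agent.sense("hot", {"temperature": 50, "pain": True})
print(agent.meaning_of("hot"))   # grounded: built from experience
print(agent.meaning_of("zork"))  # ungrounded: no internal content

The point of the sketch is only this: 'hot' means something to the agent because of structure *inside* it, not because of anything intrinsic to the symbol.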


> > There seems to be an implication that a simulation is
> > somehow 'inferior' to the 'real thing'.
> > 
> > I remember simulating my father's method of tying shoelaces
> > when I was small. I'm sure that my shoelace-tying now
> > is just as good as his ever was.
> 
> You didn't simulate your father. You imitated him.

I didn't say that.  I said I simulated his shoelace-tying method. Which I did.  If I had simulated *him* (accurately enough) I would *be* him.

And yes, I imitated that part of his behaviour.  I simulated it by imitating him doing it, over several repetitions, because that was sufficient to create a good simulation of that behaviour.
 
> If you took a video of your father tying his shoelaces and
> watched that video, you would watch a simulation.

No, I'd be watching an audio-visual recording.  That doesn't contain enough information to be called a simulation.  If it were a recording that captured his muscle movements, his language patterns, and his belief systems about tying shoelaces, over many repetitions, then it would be a simulation.  If it recorded every detail of his biochemical interactions, then it would be a good simulation.
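Here's a rough way to see the distinction in code (a toy sketch of my own; the pendulum and the numbers are arbitrary stand-ins for 'a process'):

class Recording:
    def __init__(self, frames):
        self.frames = frames        # fixed captured data
    def play(self):
        return list(self.frames)    # can only replay what was captured

class PendulumSim:
    def __init__(self, angle, velocity):
        self.angle, self.velocity = angle, velocity
    def step(self, dt=0.01, g=9.8, length=1.0):
        # the update rule models how the process works, so the
        # simulation can run under conditions never recorded
        self.velocity -= (g / length) * self.angle * dt
        self.angle += self.velocity * dt
        return self.angle

rec = Recording([0.10, 0.09, 0.07])
print(rec.play())                   # playback, nothing more
sim = PendulumSim(angle=0.2, velocity=0.0)
print([round(sim.step(), 4) for _ in range(3)])  # generated behaviour

A recording, however high-resolution, is the first class: stored output.  A simulation is the second: a working model that produces behaviour.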

 
> Is that really your father tying his shoelaces in the
> video, Ben? Or is it just pixels on the screen? I.e., just a
> simulation? 

Answered above.

 
> And if you ever watched a video of your father taken while
> he read and understood a newspaper, you watched a simulation
> of your father overcoming the symbol grounding problem. You
> watched a cartoon. Perhaps you confused the cartoon with
> reality, and thought you saw your real father understanding
> something, but in that case you weren't paying attention.

A simulation is not something that captures any old information about a process (such as what it looks like from a distance of x metres, in light between 400 and 700 nm wavelength).  It's a set of relevant information that captures the functioning of the process at whatever level of detail you need for the purposes of the simulation.  In the case of a book, it's usually the information represented by the words that is relevant, so a digital copy, a scan, or an audio recording of those words being read is sufficient.  If you're interested in the physical structure of the book, you'd need to capture a lot more information.

In the case of a mind, you need to capture the information-processing mechanisms, along with the exact data that is being processed or held in storage.  If you have that, you have that mind.
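In code terms (again just a toy of my own invention, with a deliberately trivial 'mechanism' standing in for the information-processing machinery):

class Machine:
    def __init__(self, state):
        self.state = state
    def step(self, inp):
        # the mechanism: a fixed, deterministic update rule
        self.state = (self.state * 31 + inp) % 1000
        return self.state

original = Machine(state=42)
copy = Machine(state=42)        # same mechanism, same stored data

inputs = [3, 1, 4, 1, 5]
assert [original.step(i) for i in inputs] == [copy.step(i) for i in inputs]

If the update rule and the stored state are both captured, there is nothing left to distinguish the copy from the original.  That is the claim I'm making about minds.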

As John K Clark says, it's all about information.  If you don't accept that, you must posit some other 'thing' that is important. I only know of two other 'things': matter and energy.  Do you know of another?

I have a question:
Suppose someone built a brain one cell at a time, and was somehow able to attach the cells together in exactly the same configuration, with exactly the same synaptic strengths, myelination, tight junctions, etc., cell for cell, as an existing biological brain.  Would the result be a conscious individual, the same as the natural one (assuming it was put in a suitable body, all connected up properly, etc.)?

Ben Zaiboc