[ExI] The symbol grounding problem in strong AI

Ben Zaiboc bbenzai at yahoo.com
Sat Dec 19 23:40:20 UTC 2009


> From: Gordon Swobe <gts_2000 at yahoo.com> Persisted:

> --- On Sat, 12/19/09, Ben Zaiboc <bbenzai at yahoo.com>
> wrote:
>  
> > You haven't commented on the other part of my post, where I say:
> 
> We need first to get this business about simulations
> straight...
> 
> It seems you don't understand even that a video of your
> father qualifies as a simulation of your father.
> 
> >> If you took a video of your father tying his shoelaces
> >> and watched that video, you would watch a simulation.
> > 
> > No, I'd be watching an audio-visual recording. That
> > doesn't contain enough information to call it a
> > simulation.
> 
> Sorry, it's a simulation of your father.

Would you please try not to say things like "it seems you don't understand that X" and "Sorry, but it's X" when giving your point of view? It comes across as arrogant, and I'm sure you don't mean to be.

I do understand that you think that a video of my father qualifies as a simulation of him. I just disagree with this point of view.

But let's not use that word. 

I'm talking about "things-that-reproduce-functional-properties-of-other-things", as distinct from recordings of incidental properties, such as reflectance, colour, etc., and/or abstract representations of these recordings. Can you agree that these are two different things?

A piece of paper with the words "Dad tying his shoelaces" is an abstract representation.  It might be a label for a video.  Neither of those things is going to recreate my dad's shoelace-tying behaviour, though.

A model (to avoid the "S" word) that exactly reproduces all the functional properties of the relevant shoelace-tying behaviour, however, is a different thing.  I made such a model in my mind when I was a child, and was successful in reproducing this behaviour.  Still am, in fact, and it works very well.  I'm satisfied that it's not fake or Zombie shoelace-tying; it's the Real Deal.
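To make the distinction concrete, here's a toy sketch in Python (entirely my own illustration; every name in it is hypothetical, and it obviously models nothing about real shoelaces). A recording is inert data that can only be replayed; a functional model is an executable procedure that produces the behaviour afresh, even for inputs that were never captured:

# A "recording": a fixed sequence of frames. Replaying it reproduces
# incidental properties (pixels), not the behaviour itself.
recording = ["frame_001", "frame_002", "frame_003"]

def replay(frames):
    for frame in frames:
        print("showing", frame)

# A "model": an executable procedure that reproduces the functional
# properties of the behaviour, so it works on novel inputs too.
def tie(left_end, right_end):
    crossed = (left_end, right_end)      # cross the laces
    knot = ("knot",) + crossed           # pull one end under and through
    return ("bow",) + knot               # form and tighten the bow

replay(recording)                        # only ever shows the same frames
print(tie("red_lace", "red_lace"))       # works on the recorded case
print(tie("long_lace", "short_lace"))    # and on a case never recorded

The recording can only ever replay what was captured; the model generalises, which is what I mean by reproducing functional properties.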

Now, could you please reply to my other questions?

1) Do you agree or disagree that Meaning (semantics) is an internally-generated phenomenon in a sufficiently complex, and suitably organised, information processing system with sensory inputs, motor outputs and memory storage? (A toy sketch of the sort of system I mean follows after question 2.)

2) Suppose someone built a brain one cell at a time, and was somehow able to attach the cells together in exactly the same configuration, with exactly the same synaptic strengths, same myelination, same tight junctions, etc., cell for cell, as an existing biological brain. Would the result be a conscious individual, the same as the natural one (assuming it was put in a suitable body, all connected up properly, etc.)?
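And here is the promised sketch for question 1, pared down to a caricature (again my own illustration, all names hypothetical; nothing this trivial is claimed to have semantics — the point is only the shape: sensory inputs, motor outputs, memory storage):

class Agent:
    def __init__(self):
        self.memory = []              # memory storage

    def sense(self, stimulus):        # sensory input
        self.memory.append(stimulus)  # its history accumulates internally

    def act(self):                    # motor output
        # Behaviour is selected by the system's own stored history,
        # not by any meaning assigned from outside.
        if "hot" in self.memory:
            return "withdraw hand"
        return "explore"

agent = Agent()
agent.sense("hot")
print(agent.act())                    # "withdraw hand"

Question 1 asks whether, in a vastly more complex and suitably organised version of this loop, meaning would be generated internally.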


Thanks,

Ben Zaiboc


      


