[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sat Dec 19 16:12:58 UTC 2009


--- On Sat, 12/19/09, Ben Zaiboc <bbenzai at yahoo.com> wrote:
 
> You haven't commented on the other part of my post, where I
> say:

We need first to get this business about simulations straight...

It seems you don't even understand that a video of your father qualifies as a simulation of your father.

>> If you took a video of your father tying his shoelaces
>> and watched that video, you would watch a simulation.
> 
> No, I'd be watching an audio-visual recording.  That
> doesn't contain enough information to call it a
> simulation.  

Sorry, it's a simulation of your father.

> If it was a recording that captured his
> muscle movements, his language patterns, and his belief
> systems about tying shoelaces, over many repetitions, then
> it would be a simulation.  If it recorded every detail
> of his biochemical interactions, then it would be a good
> simulation.

Now you've created a much better simulation, and good for you. However, your original video also counts as a simulation. It makes no difference how many details you include in your simulation; it will never become more than a simulation. Even if you record computer simulations of every atom in your father's body, you will still have recorded only a simulation of your father.

When you later observe that recorded computer simulation of your father, you will be watching a computer simulation of your father. It's a simulation, a cartoon. If you think you see somebody real in the cartoon who ties real shoelaces and really understands words, then you've simply deceived yourself.

-gts

More information about the extropy-chat mailing list