[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Dec 20 00:57:02 UTC 2009


--- On Sat, 12/19/09, Ben Zaiboc <bbenzai at yahoo.com> wrote:

> Now, could you please reply to my other questions?:
> 
> 1) Do you agree or disagree that Meaning (semantics) is an
> internally-generated phenomenon in a sufficiently complex,
> and suitably organised, information processing system, with
> sensory inputs, motor outputs and memory storage?

Not if it depends on formal programs to generate the semantics. 

I've already explained how I once created a computer simulation of a brain by defining an object in my code to represent one. The brain object generated seemingly meaningful answers to real human voice commands, but the semantics came entirely from the human.

The simulation itself had no idea what its answers meant; they meant something only in the mind of the human player, and then only if the human took a voluntary vacation from reality.

My primitive simulation took only a couple of hundred lines of code, but I have no reason to think it would have worked differently with a couple of hundred billion lines of code.
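To make the point concrete, here is a rough sketch in Python of the kind of program I have in mind. The class name and the canned replies are only illustrative; this is not my original code, just the general shape of it:

    # A "brain" object that maps recognized command strings to canned
    # replies. It manipulates symbols by their form alone; whatever
    # meaning the exchange has is supplied entirely by the human player.
    class SimulatedBrain:
        def __init__(self):
            # hypothetical lookup table of command -> reply
            self.replies = {
                "hello": "Hello. How are you today?",
                "how are you": "I feel fine, thank you.",
                "goodbye": "Goodbye. Talk to you soon.",
            }

        def respond(self, command_text):
            # Match the input string and return the associated string.
            # Nothing here grasps what "fine" or "goodbye" means.
            return self.replies.get(command_text.lower().strip(),
                                    "I don't understand.")

    # A speech recognizer would turn the player's voice into text before
    # it reaches the brain object:
    #   brain = SimulatedBrain()
    #   brain.respond("How are you")   # -> "I feel fine, thank you."

The player hears an apparently sensible answer, but the program only shuffles strings; the understanding happens in the player's head.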

> 2) Suppose someone built a brain by taking one cell at a
> time, and was somehow able to attach them together in
> exactly the same configuration, with exactly the same
> synaptic strengths, same myelination, same tight junctions,
> etc., etc., cell for cell, as an existing biological brain,
> would the result be a conscious individual, the same as the
> natural one? (assuming it was put in a suitable body, all
> connected up properly, etc.).

If you transplanted those neurons very carefully from one brain to build the other, then probably so. If you manufactured them and programs drive them, I don't think so. See my dialogue with Stathis.

-gts


      


