[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Tue Dec 29 15:12:12 UTC 2009


--- On Tue, 12/29/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Would the artificial neuron behave like a natural neuron or would it 
> not? 

In your partial replacement scenario, we would not look at it and say "natural neurons never act in such a way," so in this respect they work just like natural neurons. Still, I'm inclined to say they would behave slightly differently from the natural neurons that would have existed counterfactually, at least momentarily during the process. But then again, who's to say what could have been?

The point is that the subject begins to lose his first-person perspective. If we freeze the picture at that very moment, then admittedly I find myself left with a conundrum, one not unlike, say, Zeno's paradox. I resolve it by moving forward or backward in time. When we move forward to the end and complete the experiment, simulating the entire person and his environs, we find ourselves creating only objects in computer code, mere blueprints of real things. If we move backward to the beginning, then we find ourselves with the original intentional person in the real world. In between the beginning and the end we can play all sorts of fun and interesting games that challenge and stretch our imaginations, but that's all they seem to me to be.

Contrary to the rumor going around, reality really does exist. :)

-gts



      


