[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Tue Dec 29 15:34:38 UTC 2009


2009/12/30 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 12/29/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> Would the artificial neuron behave like a natural neuron or would it
>> not?
>
> In your partial replacement scenario, we would not look at it and say "natural neurons never act in such a way," so in this respect they work just like natural neurons. But I'm still inclined to say they would behave in a slightly different way than the natural neurons that would have existed counterfactually, at least momentarily during the process. Then again, who's to say what could have been?

You're inclined to say they would behave in a slightly different way?
You may as well say that God will intervene because he's so offended
by the idea that computers can think.

> The point is that the subject begins to lose his first-person perspective. If we freeze the picture at that very moment, then admittedly I find myself left with a conundrum, one not unlike, say, Zeno's paradox. I resolve it by moving forward or backward in time. When we move forward to the end and complete the experiment, simulating the entire person and his environs, we find ourselves creating only objects in computer code, mere blueprints of real things. If we move backward to the beginning, then we find ourselves with the original intentional person in the real world. In between the beginning and the end we can play all sorts of fun and interesting games that challenge and stretch our imaginations, but that's all they seem to me to be.
>
> Contrary to the rumor going around, reality really does exist. :)

Up until this point it seemed there was a chance you might follow the
argument to wherever it rationally led you.


-- 
Stathis Papaioannou
