[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 21 10:45:47 UTC 2009


A mistake in my previous post and the post I quoted from:

original--
I favour (c). I think (a) is absurd, since if nothing else, having an
experience means you are aware of having the experience. I think (a)
is very unlikely because it would imply that you are doing your
thinking with an immaterial soul, since all your neurons would be
constrained to behave normally.

should have been--
I favour (c). I think (a) is absurd, since if nothing else, having an
experience means you are aware of having the experience. I think (b)
is very unlikely because it would imply that you are doing your
thinking with an immaterial soul, since all your neurons would be
constrained to behave normally.

--

It seems clear that you are convinced that the CRA is correct. The
counterarguments we have presented seem to the rest of us to be
equally obvious refutations of the CRA, but you simply restate the CRA
and claim that it remains unrefuted. You also haven't responded
adequately to the "fading qualia" argument, which purports to prove
that computers of a certain design not only can but *must* have minds.
So it seems that we are at an impasse: to us it seems that you're
being stubborn, and to you it probably seems that we're being stubborn.

-- 
Stathis Papaioannou
