[ExI] The symbol grounding problem in strong AI
stathisp at gmail.com
Mon Jan 4 12:05:41 UTC 2010
2010/1/4 Gordon Swobe <gts_2000 at yahoo.com>:
>> I actually believe that semantics can *only* come from
>> syntax, but if it can't, your fallback is that semantics
>> comes from the physical activity inside brains.
> Something along those lines, yes. But we can't paste form onto substance and expect intrinsic intentionality, and that's all formal programs do to hardware substance. We might just as well write a letter and expect the letter to understand the words.
Still, you have agreed that while programming is not sufficient for
intelligence, it cannot prevent intelligence. So although you may have
a strong hunch that p-neurons aren't c-neurons, you can't claim this
with the force of logical necessity. And logical necessity is what would
be required to justify the weirdness entailed by the partial brain
replacement experiment I have been describing.