[ExI] The symbol grounding problem in strong AI
Stathis Papaioannou
stathisp at gmail.com
Tue Dec 29 02:40:56 UTC 2009
2009/12/29 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Mon, 12/28/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> Before proceeding, I would like you to say what you think
>> you would experience if some of your neurons were replaced with
>> artificial neurons that behave externally like biological neurons but,
>> being tainted with programming, lack understanding.
>
> Again it looks as if you've asked me to predict the outcome of a logical impossibility.
>
> Similar to your last cyborg-like experiment, you have tampered with or completely short-circuited the feedback loop between the subject's behavior, including the behavior of his neurons, and his understanding. And so contrary to the wording in your question, the program-driven neurons will not "behave externally like biological neurons".
>
> What will "I" experience in the midst of this highly dubious and seemingly impossible state of affairs, you ask? I dare not even guess. It's unclear that "I" would even exist, and then only because you included the word "some".
You claim both that the physics of neurons is computable AND that it
is impossible to make program-driven neurons that behave like natural
neurons, which is a contradiction: if the physics of a neuron is
computable, then by definition a program can reproduce that neuron's
external behaviour. Even Searle agrees that you can make artificial
neurons that behave like natural neurons, in the passage I quoted
earlier: that's what weak AI is!
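
To make "the physics of neurons is computable" concrete, here is a minimal sketch in Python of a program-driven neuron. It uses a simple leaky integrate-and-fire model rather than real biophysics, and all names and parameter values (simulate_lif, tau, v_threshold, etc.) are illustrative assumptions, not anything from this exchange; the point is only that stepping a membrane equation in a program yields the same external spiking behaviour a downstream neuron would see.

    # Minimal leaky integrate-and-fire neuron: an illustration of how a
    # program can reproduce a neuron's external input/output behaviour.
    # (Toy model; a faithful simulation would use Hodgkin-Huxley or better.)

    def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_reset=-70.0, v_threshold=-50.0, r_m=10.0):
        """Step dV/dt = (-(V - v_rest) + R*I) / tau and return spike times (ms)."""
        v = v_rest
        spikes = []
        for step, i_ext in enumerate(input_current):
            dv = (-(v - v_rest) + r_m * i_ext) / tau
            v += dv * dt
            if v >= v_threshold:       # membrane reaches threshold: emit a spike
                spikes.append(step * dt)
                v = v_reset            # reset after firing
        return spikes

    # A constant 2 nA drive for 100 ms produces a regular spike train --
    # externally, that spike train is all the rest of the brain ever sees.
    print(simulate_lif([2.0] * 1000))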
--
Stathis Papaioannou