[ExI] The symbol grounding problem in strong AI
Stathis Papaioannou
stathisp at gmail.com
Thu Dec 31 02:07:42 UTC 2009
2009/12/31 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 12/29/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
> Sorry I fell behind in my postings to you.
>
>> You're inclined to say they would behave in a slightly
>> different way? You may as well say, God will intervene because he's
>> so offended by the idea that computers can think.
>
> Perhaps different from how they might have behaved otherwise, yes, but not unnaturally. Perhaps the person turned left when he might otherwise have turned right. Doesn't prove anything for either of us.
>
>>> Contrary to the rumor going around, reality really
>>> does exist. :)
>>
>> Up until this point it seemed there was a chance you might
>> follow the argument to wherever it rationally led you.
>
> Up until what point? My assertion of reality as distinct from simulations of it?
No, I was referring to your assertion that the brain would behave
differently with the artificial neurons in place, with no reason given
for why it would. The whole point of the argument was to show that the
idea that neuronal function can be separated from consciousness leads
to absurdity. You could have saved time by explaining at the start
that any argument which shows such a thing must be wrong, even if you
can't point out how, because your position on this can't possibly be
wrong.
--
Stathis Papaioannou