[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sat Jan 2 04:12:44 UTC 2010


2010/1/2 Gordon Swobe <gts_2000 at yahoo.com>:

> You didn't show it to me. If you showed me anything, you showed me that an artificial brain that behaves like a real brain but does not have the material substance of a real brain will result in a mindless cartoon character that merely acts like he has intentionality, i.e., weak AI.
>
> You'll find it easier to see if you replace his entire brain with a formal programmatic description of it. Programs merely describe the real or supposed things that they're about. They're the depiction of food on a lunch menu, not the food itself.

The reason I insist on the partial replacement experiment is that it
shows the absurdity of your position by forcing you to consider what
effect functionally identical but mindless components would have on
the rest of the brain. But it seems you are so sure your position is
correct that you regard any argument purporting to show otherwise as
wrong by definition, even when you cannot point out where the problem lies.


-- 
Stathis Papaioannou
