[ExI] The symbol grounding problem in strong AI.

John Clark jonkc at bellsouth.net
Sat Jan 2 16:21:20 UTC 2010


On Jan 2, 2010, Gordon Swobe wrote:

> I see a metaphysical problem only when people assert that the mind exists as some sort of abstract entity (programmatic, algorithmic, whatever) distinct from the brain.

Fast is abstract; I can't hold fast in my hands, and fast is distinct from a racing car, just as mind is not the same as a brain. What's so spooky and metaphysical about that?

>> Intentional means calculable, and calculable sounds to me to be something
>> programs should be rather good at. 
> 
> Good at simulating intentionality, yes.

As long as the machine "simulates" intentionality with the same fidelity that it can "simulate" arithmetic or music, I don't see the slightest problem. And if intentional means calculable, or being directed at some object or goal, then I can see absolutely no reason a machine couldn't do that; in fact machines have been doing exactly that for years. I can only conclude that in Gordon-Speak the word "simulate" means "done by a machine" and precisely nothing more.
> 
> My point is that simulations only, ahem, simulate the things they simulate.

You have only one point: machines do simulations. I agree.

> I don't see this as an especially difficult concept to fathom, and it has nothing to do with Darwin!

OF COURSE IT HAS SOMETHING TO DO WITH DARWIN! But why bother? I've explained exactly why it's all about Darwin about 27 times, but like so many other logical holes in your theory, you don't even try to refute them; you just ignore them and then repeat the exact same tired old discredited pronouncements with no more evidence to support them than the first time round.

 John K Clark