[ExI] The symbol grounding problem in strong AI

Eugen Leitl eugen at leitl.org
Sun Dec 20 09:12:32 UTC 2009


On Sun, Dec 20, 2009 at 04:37:54PM +1100, Stathis Papaioannou wrote:

> > What different system?
> >
> > If you mean the natural brain, (the only different system known to have understanding), then it doesn't matter whether we can recognize its processes as algorithmic. Any computer running those possible algorithms would have no understanding.
> >
> > More generally, computer simulations of things do not equal the things they simulate.

Still having fun? Still think you're having an argument?
In reality you aren't. After much back and forth, everybody's
positions will be exactly where they were before.

So save the wear on your fingertips and on our retinas.
 
> But it seems that you and Searle are saying that the CR lacks
> understanding *because* the man lacks understanding of Chinese,
> whereas the brain, with completely dumb components, has understanding.
> So you are penalising the CR because it has smart components and
> because what it does has an algorithmic pattern. By this reasoning, if
> neurons had their own separate rudimentary intelligence and if someone
> could see a pattern in the brain's functioning to which the term
> "algorithmic" could be applied, then the brain would lack
> understanding also.

-- 
Eugen* Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
