[ExI] The symbol grounding problem in strong AI.

John Clark jonkc at bellsouth.net
Sun Jan 3 16:39:21 UTC 2010


On Jan 3, 2010, Stathis Papaioannou wrote:

> This is not true if it is impossible to create intelligent behaviour
> without consciousness using biochemistry, but possible using
> electronics, which evolution had no access to. I point this out only
> for the sake of logical completeness, not because I think it is
> plausible.

Even in that case it would indicate that it is easier to make a conscious intelligence than an unconscious one, so when you encounter intelligence your default assumption should be that consciousness is behind it. Searle assumes the opposite: he assumes unconsciousness, no matter how brilliant an intelligence may be, unless consciousness is proven; the catch-22, of course, is that consciousness can never be proven.

Also, if it's the biochemistry inside the neuron that mysteriously generates consciousness, and not the signals between neurons that a computer could simulate, then each neuron is on its own as far as consciousness is concerned. One neuron would have to be sufficient to produce consciousness, because the neurons can't work together on this project. And if you allow one neuron to have consciousness even though it has no intelligence, it is a very small step to insist that rocks, which are no dumber than neurons, have it too. So now we have intelligence without consciousness, consciousness without intelligence, and rocks with feelings; that is not a position I'd be comfortable defending.

As I said before, creationists correctly say that life and intelligence are too grand to have come about by chance; but if consciousness has no effect on behavior then evolution could not have selected for it, so Searle's position implies that chance is exactly how biology came up with consciousness.

 John K Clark