[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Fri Dec 18 17:54:20 UTC 2009


On Dec 18, 2009,  Gordon Swobe wrote:

> Our task here involves more than mimicking intelligent human behavior (weak AI).

From a human point of view, that's all that matters. Conscious or not, if a Jupiter Brain is a billion times smarter than we are, then we're toast.

> I don't disagree (nor would Searle) that artificial neurons such as those you describe might produce intelligent human-like behavior. Such a machine might seem very human. But would it have intentionality as in strong AI, or merely seem to have it as in weak AI?

If this distinction between strong and weak AI is real, as Searle thinks, then the backbone of all the biological sciences, Evolution, is wrong. Are you really ready to side with the creationists?

 John K Clark  
