[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Jan 7 14:10:31 UTC 2010


--- On Thu, 1/7/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> There *must* be something uncomputable about the behaviour of neurons...

No.

>... if it can't be copied well enough to make p-neurons,
> artificial neurons which behave exactly like b-neurons but lack the
> essential ingredient for consciousness. This isn't a contingent fact,
> it's a logical requirement.

Yes, and now you see why I claim Cram's surgeon must go in repeatedly to patch the software until his patient passes the Turing test: because the patient has no experience, the surgeon must keep working to meet your logical requirement. The surgeon finally gets it right with Service Pack 9076. Too bad his patient can't know it.

-gts
