[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Thu Jan 7 22:13:49 UTC 2010


2010/1/8 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Thu, 1/7/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> There *must* be something uncomputable about the behaviour of neurons...
>
> No.

(Of course I don't claim that there must be something uncomputable
about neurons; it's only if p-neurons are impossible, as you seem to
be saying, that there must be something uncomputable about neurons.)

>>... if it can't be copied well enough to make p-neurons,
>> artificial neurons which behave exactly like b-neurons but lack the
>> essential ingredient for consciousness. This isn't a contingent fact,
>> it's a logical requirement.
>
> Yes, and now you see why I claim Cram's surgeon must go in repeatedly to patch the software until his patient passes the Turing test: because the patient has no experience, the surgeon must keep working to meet your logical requirements. The surgeon finally gets it right with Service Pack 9076. Too bad his patient can't know it.

The surgeon will be rightly annoyed if the tweaking and patching has
not been done at the factory so that the p-neurons just work.


-- 
Stathis Papaioannou
