[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Jan 8 13:44:39 UTC 2010

--- On Thu, 1/7/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> Yes and now you see why I claim Cram's surgeon must go
>> in repeatedly to patch the software until his patient passes
>> the Turing test: because the patient has no experience, the
>> surgeon must keep working to meet your logical requirements.
>> The surgeon finally gets it right with Service Pack 9076.
>> Too bad his patient can't know it.
>
> The surgeon will be rightly annoyed if the tweaking and
> patching has not been done at the factory so that the p-neurons just
> work.

My point here is that because experience affects behavior, including neuronal behavior, and because the patient presents with symptoms indicating no experience of understanding language, and because on my account p-neurons != c-neurons, the p-neurons cannot work as advertised "out of the box". The initial operation fails miserably. The surgeon must then keep reprogramming the p-neurons and replacing more natural neurons throughout the patient's brain. He eventually succeeds in producing intelligent and coherent behavior in his patient, but it costs the patient most or all of his intentionality.
