[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Tue Dec 29 02:26:23 UTC 2009


--- On Mon, 12/28/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Before proceeding, I would like you to say what you think
> you would experience if some of your neurons were replaced with
> artificial neurons that behave externally like biological neurons but,
> being tainted with programming, lack understanding.

Again it looks as if you've asked me to predict the outcome of a logical impossibility.

As in your last cyborg-like experiment, you have tampered with, or completely short-circuited, the feedback loop between the subject's behavior (including the behavior of his neurons) and his understanding. So, contrary to the wording of your question, the program-driven neurons will not "behave externally like biological neurons".

What will "I" experience in the midst of this highly dubious and seemingly impossible state of affairs, you ask? I dare not even guess. It is unclear that "I" would even exist at all, and then only because you included the word "some".

-gts
