[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Tue Dec 29 14:01:17 UTC 2009


2009/12/29 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Mon, 12/28/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> You claim both that the physics of neurons is computable
>
> Yes.
>
>> AND that it is impossible to make program-driven neurons that behave
>> like natural neurons, which is a contradiction.
>
> No, you misunderstood me, and I should have made myself more clear. I meant that your artificial neurons in your experiment would not act as would have the natural neurons that they replaced -- not that they would act in a manner uncharacteristic of neurons.
>
>> Even Searle agrees that you can  make artificial neurons that behave
>> like natural neurons,
>
> As do I. That was the #2 point in my post to you yesterday. I quote myself here:
>
> "2) I believe we can in principle create neurons "based on" those computer blueprints, just as we can make anything from blueprints, and that those manufactured neurons will behave exactly like natural neurons."

You leave me confused. Would the artificial neuron behave like a
natural neuron or would it not? It seems to me that you have to agree
that it would. After all, you agree with Searle that a computer could
in theory fool a human into thinking that it was a fellow human, and
just fooling the adjacent neurons into thinking it's one of them
should be a vastly easier task. So what do you mean by saying the
artificial neurons "would not act as would have the natural neurons
that they replaced"?

> To my way of thinking, your cyborg-like thought experiment takes a single snapshot of a process. You want to focus on that single snapshot, but I look at the entire process. In that process you have me changing into a computer simulation. While the circumstance pictured in that single snapshot seems odd to me subjectively, to you as the objective observer everything would seem quite normal.
>
> At the end of that process I no longer exist as an intentional entity. Although that simulation of me exhibits all the objective characteristics of a person with intentionality, my simulated consciousness no longer has a first-person ontology.

I operate on your brain and install my artificial neurons in place of
a volume of tissue involved in some important aspect of cognition,
such as visual perception or language. Your brain, and hence you, would
have to behave normally if my artificial neurons behave normally, by
definition. But although I can make my artificial neurons behave
normally, you claim that I can't imbue them with intentionality. So,
will you feel normal or won't you? It seems you are suggesting that
you would not feel normal, but would instead feel that something weird
was happening. Can you explain how you would feel this - where in your
brain the acknowledgment of the weird feeling would physically occur -
given that all your neurons are forced to behave as they would have if
no change had been made?
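(A minimal sketch of what I mean by a "program-driven neuron", in case
it helps: the leaky integrate-and-fire model below maps input currents
to output spikes entirely in software. The model choice and every
parameter are illustrative assumptions on my part, not anything Gordon
or Searle has specified, and a genuine replacement neuron would of
course need far more biophysical detail.)

    # Toy leaky integrate-and-fire neuron: a program-driven unit that
    # turns incoming current into output spikes. All parameters are
    # illustrative, not measurements of any real neuron.
    def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
        """Return spike times (ms) for input currents sampled every dt ms."""
        v = v_rest
        spike_times = []
        for step, current in enumerate(input_current):
            # Membrane potential leaks toward rest and is pushed by input.
            v += (-(v - v_rest) + resistance * current) * (dt / tau)
            if v >= v_threshold:          # threshold crossed: emit a spike
                spike_times.append(step * dt)
                v = v_reset               # reset after the spike
        return spike_times

    # A constant 2 nA drive for 100 ms yields a regular spike train.
    print(simulate_lif([2.0] * 1000)[:5])

The only point of the sketch is that the unit's input-output behaviour
is fixed by the program, so the neighbouring neurons receive the same
spikes they would have received from the original tissue.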


-- 
Stathis Papaioannou


