[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 28 13:40:33 UTC 2009


2009/12/29 Stefano Vaj <stefano.vaj at gmail.com>:
> 2009/12/28 Stathis Papaioannou <stathisp at gmail.com>
>>
>> So (a) is incoherent and (b) implies the existence of an immaterial
>> soul that does your thinking in concert with the brain until you mess
>> with it by putting in artificial neurons. That leaves (c) as the only
>> plausible alternative.
>
> It sounds plausible enough to me.
>
> But, once more, isn't the whole issue pretty close to koan questions such
> as "what sound does a falling tree make when nobody hears it fall?"?
>
> What obfuscates the AGI debate is IMHO an abuse of poorly defined terms
> such as "consciousness", etc., in an implicitly metaphysical and
> essentialist, rather than phenomenal, sense. This does not get one very
> far either in the discussion of "artificial" brains or in the
> understanding of organic ones.

We don't need to define or explain consciousness in order to use the
term. I could say: I don't know what consciousness is, but I know it
when it's happening to me. So my question is: will I still have
consciousness in this sense if my brain is replaced with an electronic
one that results in the same behaviour? And the answer is: yes. That's
what the thought experiment I've described demonstrates.


-- 
Stathis Papaioannou


