[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 28 21:15:21 UTC 2009



On 29/12/2009, at 5:30 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:

> 2009/12/28 Stathis Papaioannou <stathisp at gmail.com>
> So my question is, Will I still have
> consciousness in this sense if my brain is replaced with an electronic
> one that results in the same behaviour? And the answer is, Yes. That's
> what the thought experiment I've described demonstrates.
>
>
> Yes.
>
> Or you might already be a philosophical zombie, in which case  
> neither you (by definition, not being "conscious") nor I (because we  
> are restricted to dealing with *phenomena*, any hypothetical Ding-an- 
> sich being unattainable anyway) would know anything about that.
>
> This is another way to say that philosophical zombies, either  
> "natural" or electronic, cannot be part of anybody's reality.

I wouldn't know it if I were or suddenly became a zombie, but I would  
know it if, not currently being a zombie, part of my brain were  
replaced with zombie components.

--
Stathis Papaioannou