[ExI] Meaningless Symbols.

Ben Zaiboc bbenzai at yahoo.com
Mon Jan 11 16:18:00 UTC 2010


Damien Broderick <thespike at satx.rr.com> wrote:

On 1/10/2010 7:27 PM, Stathis Papaioannou wrote:

>> Gordon has in mind a special sort of understanding which makes no
>> objective difference and, although he would say it makes a subjective
>> difference, it is not a subjective difference that a person could
>> notice.
>
> I have a sneaking suspicion that what is at stake is volitional
> initiative, conscious weighing of options, the experience of assessing
> and then acting. Yes, we know a lot of this experience is illusory, or
> at least misleading, because a large part of the process of "willing" is
> literally unconscious and precedes awareness, but still one might hope
> to have a machine that is aware of itself as a person, not just a tool
> that shuffles through canned responses--even if that can provide some
> simulation of a person in action. It might turn out that there's no
> difference, once such a complex machine is programmed right, but until
> then it seems to me fair to suppose that there could be. None of these
> concessions will satisfy Gordon, I imagine.


I have a very strong suspicion that a tool that shuffles through canned responses can never even approach the performance of a self-aware person; only another self-aware entity could match it. If you program your complex machine right, you will have to include whatever features give it self-awareness, consciousness, or whatever you want to call it.

In other words, philosophical zombies are impossible, because the only way of emulating a self-aware entity is in fact to be one.
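
To make "a tool that shuffles through canned responses" concrete, here is a minimal sketch (Python; the table entries and names are my own invention for illustration, nothing from this thread) of an ELIZA-style responder: a fixed lookup table of pattern/reply pairs, with no state and no model of the speaker:

import re

# A hypothetical, minimal canned-response "chatbot": a fixed table of
# (pattern, reply) pairs. There is no state, no memory, and no
# understanding -- just string matching and retrieval.
CANNED_RESPONSES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you today?"),
    (re.compile(r"\bhow are you\b", re.I), "I'm fine. Tell me more about yourself."),
    (re.compile(r"\bi feel (\w+)", re.I), "Why do you feel {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first canned reply whose pattern matches, else a stock fallback."""
    for pattern, reply in CANNED_RESPONSES:
        match = pattern.search(utterance)
        if match:
            return reply.format(*match.groups())
    # Any input outside the table gets the same evasive fallback --
    # exactly the brittleness the argument above points to.
    return "Interesting. Please go on."

print(respond("Hi there"))        # -> Hello! How are you today?
print(respond("I feel anxious"))  # -> Why do you feel anxious?
print(respond("What is 2 + 2?"))  # -> Interesting. Please go on.  (table miss)

However large the table grows, it only ever retrieves; it never weighs options or initiates anything, and it fails the moment input strays outside it.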

Ben Zaiboc

