[ExI] evolution of consciousness

Stathis Papaioannou stathisp at gmail.com
Sat Feb 13 21:49:08 UTC 2010


On 14 February 2010 05:42, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Thu, 2/11/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> A computer model of the brain is made that controls a body,
>> i.e. a robot. The robot will behave exactly like a human.
>
> Such a robot may be made, yes.
>
>> Moreover, it will behave exactly like a human due to an isomorphism
>> between the structure and function of the brain and the structure and
>> function of the computer model, since that is what a model is. Now, you
>> claim that this robot would lack consciousness.
>
> I never made that claim. In fact I don't recall that you and I ever discussed robots until now. I suppose that you've made this claim above on my behalf.

You posted a link to a site that purported to show that not even a
robot which had input from the real world would have understanding.
Have you changed your mind about this?

>> This means that there is nothing about the intelligent behaviour of the
>> human that is affected by consciousness. For if consciousness were a
>> separate thing that affected behaviour, there would be some deficit in
>> behaviour if you reproduced the functional relationship between brain
>> components while leaving out the consciousness. Therefore, consciousness
>> must be epiphenomenal. You might have said that you rejected
>> epiphenomenalism, but you cannot do so consistently.
>
> No, I reject epiphenomenalism consistently. In the *actual* thought experiment that you proposed, in which the human patient presented with a lesion in Wernicke's area causing a semantic deficit and in which the surgeon used p-neurons, the patient DID suffer from a deficit after the surgery. And that deficit was due precisely to the fact that he would lack the experience of his own understanding, which would in turn affect his behavior. I.e., epiphenomenalism is false. This is why I stated multiple times that the surgeon would need to keep working until he finally got the patient's behavior right. In the end his patient would pass the Turing test yet still have no conscious understanding of words.

The patient COULD NOT suffer from a deficit after the surgery. A
behavioural deficit would mean that the p-neurons' outputs differed
from those of the biological neurons they replaced, i.e. that ~P is
true. So you would be saying that both P and ~P are true, where P =
"the p-neurons exactly reproduce the I/O behaviour of the biological
neurons".

>> The only way you can consistently maintain your position
>> that computers can't reproduce consciousness is to say that they
>> can't reproduce intelligence either.
>
> Not so.
>
>> If you don't agree with this you must explain why I am wrong when I
>> point out the self-contradictions that zombies would lead to, and you
>> simply avoid doing this.
>
> I just don't see the contradictions that you see, Stathis.
>
> Let me ask you this: your general position seems to be that weak AI is false; that if weak AI is possible then strong AI must also be possible, because the distinction between weak and strong is false and anything that passes the Turing test must have strong AI. Is that your position?

Yes.


-- 
Stathis Papaioannou


