[ExI] Zombies

Ben Zaiboc ben at zaiboc.net
Sat Apr 29 21:29:42 UTC 2023


On 29/04/2023 20:22, Gordon Swobe wrote:
> Brent I do agree, however, that a large language model cannot have a 
> conscious experience of redness. In terms of my arguments, just as it 
> cannot have conscious experience of color, it cannot have a conscious 
> understanding of the meanings of words (including words about color) 
> based only on its analysis of how words are arranged statistically in 
> the training corpus.
>
> It can know only how to arrange those words in sentences and 
> paragraphs that have meaning *to us*, the end-users. This is what it 
> learned from its deep machine learning.
>
> And this, incidentally, is exactly what GPT-4 claims to do, but people 
> here don’t believe it. I wonder what people here on ExI will say in 
> the very near future when all major language models “mature” to 
> GPT-4’s level and have the same understanding of language models as 
> GPT-4 and I do. Will people here call all the AIs liars?
>
> By the way, Jason, you were saying that the models at character.ai 
> still claim to be conscious. I went there and found that not to be 
> the case. Perhaps you can show me what you meant.
>
> LLMs that claim consciousness are, in my view, just toys for 
> entertainment. They might make good romantic partners for lonely 
> people with vivid imaginations, but they are toys.


So you believe them when they claim not to be conscious, but don't 
believe them when they claim they are.

And you expect us to take your reports of what they say as evidence for 
whether they are conscious or not.

Can you see a problem with that?

Ben