[ExI] Zombies
Gordon Swobe
gordon.swobe at gmail.com
Sat Apr 29 19:08:28 UTC 2023
On Sat, Apr 29, 2023 at 6:15 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Sat, Apr 29, 2023, 5:26 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> From my discussions with Brent and Gordon, they differ in their views.
>
We differ mostly in that Brent has some physicalist ideas about how, for
example, something like glutamate might explain the experience of redness.
Like many people here, I do not understand that point of view.
Brent and I do agree, however, that a large language model cannot have a
conscious experience of redness. In terms of my arguments, just as it
cannot have a conscious experience of color, it cannot have a conscious
understanding of the meanings of words (including words about color) based
only on its analysis of how words are arranged statistically in the
training corpus.
It can know only how to arrange those words in sentences and paragraphs
that have meaning *to us*, the end-users. This is what it learned from
its deep machine learning.
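
For what it's worth, here is a toy illustration of what I mean by
"arranging words by their statistics." This is my own simplified sketch
in Python, not GPT-4's actual architecture (which is a transformer
trained on next-token prediction over an enormous corpus), but the
principle of modeling which word-shapes follow which is the same in
spirit:

# Toy sketch: a bigram "language model" that learns only which words
# tend to follow which, then strings words together accordingly.
# (Illustration only -- real LLMs are vastly larger and more
# sophisticated, but they too are trained on word statistics.)
from collections import defaultdict, Counter
import random

corpus = "the apple is red . the sky is blue . the rose is red .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """Emit words by repeatedly sampling a likely successor."""
    word, out = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choices(list(candidates), weights=candidates.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the rose is red . the"

Nothing in those counts refers to anything outside the text itself; the
output has meaning only to the reader.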
And this, incidentally, is exactly what GPT-4 claims to do, but people here
don’t believe it. I wonder what people here on ExI will say in the very
near future when all major language models “mature” to GPT-4’s level and
have the same understanding of language models as GPT-4 and I do. Will
people here call all the AIs liars?
By the way, Jason, you were saying that the models at character.ai still
claim to be conscious. I went there and found that not to be the case.
Perhaps you can show me what you meant.
LLMs that claim consciousness are, in my view, just toys for entertainment.
They might make good romantic partners for lonely people with vivid
imaginations, but they are toys.
-gts