gordon.swobe at gmail.com
Sun Apr 30 05:08:20 UTC 2023
On Sat, Apr 29, 2023 at 4:09 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>> By the way, Jason, you were saying that the models at character.ai still
>> claim to be conscious. I went there and found that not to be the case.
>> Perhaps you can show me what you meant.
Is this version of LaMDA online? I'm looking for live examples of LLMs that
(still) claim consciousness. I thought you wrote that they exist at
character.ai, but the one I tried there gave answers similar to GPT's.
I want to interrogate one. :)
I'm not sure if you saw my response to the conversation you had with LaMDA
about its understanding of word meanings. Its affirmative answer was
nonsensical: LaMDA claimed that its creators provided it with a database of
word meanings, which is impossible. Any such database would consist of
written word *definitions*, so looking up a meaning only yields more words,
each needing a definition of its own, and so on in an endless regress.
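The regress is easy to sketch in a few lines of Python. This is only an illustration of the argument, not any real system's design; the words and definitions below are hypothetical, chosen so the chain of lookups visibly loops back on itself:

```python
# A toy "database of word meanings": every definition is itself made of
# words, so a lookup only ever yields more words to look up.
definitions = {
    "meaning": "the idea that a word expresses",
    "idea": "a thought or concept in the mind",
    "thought": "an idea produced by thinking",
}

def trace_lookup(word, steps=6):
    """Follow definitions word by word; return the chain of words visited."""
    chain = [word]
    for _ in range(steps):
        definition = definitions.get(chain[-1])
        if definition is None:
            break  # fell off the dictionary -- no meaning found, just silence
        # Take the first word in the definition that is itself defined,
        # and look *it* up next.
        next_word = next((w for w in definition.split() if w in definitions), None)
        if next_word is None:
            break
        chain.append(next_word)
    return chain

print(trace_lookup("meaning"))
# The chain cycles: meaning -> idea -> thought -> idea -> thought -> ...
```

However many steps you follow, the chain never terminates in anything that is not another word, which is the point of the objection above.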