[ExI] Bard (i.e. LaMDA) admits it isn't sentient.
gordon.swobe at gmail.com
Wed Apr 5 20:09:43 UTC 2023
On Wed, Apr 5, 2023 at 1:48 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>> Seriously, the only reason LLMs are able to write persuasively in the first
>> person like conscious individuals is that they have been trained on vast
>> amounts of text, much of it written in the first person by conscious
>> individuals. They are parrots.
>> As I wrote elsewhere, Sam Altman’s co-founder proposes a test for a
>> conscious language model in which it must be trained only on material that
>> is devoid of any references to consciousness and subjective experience
>> and so on. If such an LLM suddenly started writing in the first person
>> about first person thoughts and experiences, that would be remarkable.
> You need to give your definition of consciousness before you can even
> begin to design a test for it.
As you probably know, Sam Altman is CEO of OpenAI, developer of GPT-4. He
and his co-founder Ilya Sutskever have considered these questions
carefully. The idea is that the training material must have no references
to self-awareness, consciousness, subjective experience, or anything
related to these ideas. Imagine, for example, that an LLM was trained only on a
giant and extremely thorough Encyclopedia Britannica, containing all or
almost all human knowledge, and which, like any encyclopedia, is written
almost entirely in the third person. Any definitions or articles in the
encyclopedia related to consciousness and so on would need to be removed.
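As a rough illustration only (not anything Sutskever or OpenAI has published), the data-scrubbing step of such a test might be sketched as a simple term filter over the training corpus. The term list here is hypothetical and far too small; a real attempt would need an exhaustive vocabulary and human review.

```python
import re

# Hypothetical exclusion list; a real filter would be far more thorough.
FORBIDDEN = ["conscious", "self-aware", "subjective", "qualia",
             "I think", "I feel", "my experience"]

pattern = re.compile("|".join(re.escape(t) for t in FORBIDDEN), re.IGNORECASE)

def is_clean(article: str) -> bool:
    """Keep only articles with no reference to consciousness or
    first-person experience."""
    return pattern.search(article) is None

corpus = [
    "Water boils at 100 degrees Celsius at sea level.",
    "Philosophers debate whether machines can be conscious.",
]
training_set = [a for a in corpus if is_clean(a)]
```

Even this toy version shows why the test is hard to administer: first-person language is woven through ordinary text, so scrubbing it cleanly is itself a substantial problem.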
In Sutskever's thought experiment, the human operator makes some
interesting observation about the material in the encyclopedia, and the LLM
remarks something like "I was thinking the same thing!" That would be
proof of consciousness. I think it would also be a miracle, because the LLM
would have invented the word "I" out of thin air.