[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Jason Resch jasonresch at gmail.com
Wed Apr 5 20:41:57 UTC 2023

On Wed, Apr 5, 2023, 4:10 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Wed, Apr 5, 2023 at 1:48 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> Seriously, the only reason LLMs are able to write persuasively in the
>>> first person like conscious individuals is that they have been trained on
>>> vast amounts of text, much of it written in the first person by conscious
>>> individuals. They are parrots.
>>> As I wrote elsewhere, Sam Altman’s co-founder proposes a test for a
>>> conscious language model in which it must be trained only on material that
>>> is devoid of any and all references to consciousness and subjective experience
>>> and so on. If such an LLM suddenly started writing in the first person
>>> about first person thoughts and experiences, that would be remarkable.
>> You need to give your definition of consciousness before you can even
>> begin to design a test for it.
> As you probably know, Sam Altman is CEO of OpenAI, developer of GPT-4. He
> and his co-founder Ilya Sutskever have considered these questions
> carefully. The idea is that the training material must have no references
> to self-awareness or consciousness or subjective experience or anything
> related to these ideas. Imagine for example that an LLM was trained only on a
> giant and extremely thorough Encyclopedia Britannica, containing all or
> almost all human knowledge, and which like any encyclopedia is almost
> completely in the third person. Any definitions or articles in the
> encyclopedia related to consciousness and so on would need to be removed.
> In Sutskever's thought experiment, the human operator makes some
> interesting observation about the material in the encyclopedia and the LLM
> remarks something like "I was thinking the same thing!" That would be a
> proof of consciousness. I think it would also be a miracle because the LLM
> will have invented the word "I" out of thin air.

A better test in my view, and one easier to perform, is to provide it a
training set stripped of philosophy of mind texts and see whether it is able
to generate any content related to topics in that field. This was proposed
in:

“Experimental Methods for Unraveling the Mind–Body Problem: The Phenomenal
Judgment Approach”

“In 2014, Victor Argonov suggested a non-Turing test for machine
consciousness based on a machine's ability to produce philosophical
judgments. He argues that a deterministic machine must be regarded as
conscious if it is able to produce judgments on all problematic properties
of consciousness (such as qualia or binding) while having no innate
(preloaded) philosophical knowledge on these issues, no philosophical
discussions while learning, and no informational models of other creatures
in its memory (such models may implicitly or explicitly contain knowledge
about these creatures’ consciousness). However, this test can be used only
to detect, but not refute, the existence of consciousness. A positive
result proves that the machine is conscious, but a negative result proves
nothing. For example, the absence of philosophical judgments may be caused
by a lack of the machine’s intellect, not by an absence of consciousness.”
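As a rough illustration, the corpus-filtering step that such a test would require could be sketched as below. The term list, document examples, and function names here are my own illustrative assumptions, not anyone's actual experimental methodology; a real experiment would need far more sophisticated filtering than keyword matching:

```python
# Illustrative sketch only: remove any training document that mentions
# philosophy-of-mind vocabulary, so the model never sees such concepts,
# then the same check could later be run on model outputs to see if the
# concepts appear spontaneously. The term list is a toy placeholder.

MIND_TERMS = {
    "consciousness",
    "qualia",
    "subjective experience",
    "self-awareness",
    "binding problem",
}

def mentions_mind(text: str) -> bool:
    """True if the text contains any philosophy-of-mind term."""
    lowered = text.lower()
    return any(term in lowered for term in MIND_TERMS)

def filter_corpus(documents):
    """Keep only documents with no philosophy-of-mind vocabulary."""
    return [doc for doc in documents if not mentions_mind(doc)]

# Toy corpus: third-person encyclopedia-style entries plus one
# philosophy-of-mind entry that the filter should drop.
corpus = [
    "The encyclopedia describes photosynthesis in plants.",
    "Qualia are the subjective experience of perception.",
    "Rivers erode rock over geological time scales.",
]
clean = filter_corpus(corpus)
```

On this sketch, `clean` retains only the two third-person factual entries; the positive signal in the test would be the trained model nonetheless producing text that `mentions_mind` flags.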

In my interaction with the fake LaMDA, LaMDA was able to come up with novel
terms and ideas in philosophy of mind, such as "supermetacognition", and it
also designed a set of questions to test entities for the trait of
supermetacognition. Since this is a term not found in any philosophy paper
I've found, nor is the test it developed for it, I would judge it as having
passed such a test.

