[ExI] Language models are like mirrors
gordon.swobe at gmail.com
Mon Apr 3 03:16:46 UTC 2023
On Sun, Apr 2, 2023 at 8:36 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> To achieve sentience, it will need to be able to learn, and not just during
> the training period. It will also need access to tools, like math
I learned from watching an interview with Sam Altman of OpenAI/ChatGPT fame
that he approves of his co-founder Ilya Sutskever's proposed test for
consciousness in an AI language model. I doubt anyone will ever attempt the
experiment, but I also think it makes sense, at least in principle.
Altman was paraphrasing Sutskever and I am paraphrasing Altman, but it goes
something like this:
Any language model trained on ordinary human discourse will pick up terms
like "consciousness" and "subjective experience," find the patterns of
language associated with those terms, and mimic those patterns. So the data
set on which the test model is trained must be completely devoid of any and
all such terms and references.
I'm not personally sure if such a dataset is even possible, but say a
language model is trained on it.
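As a toy illustration (my own, not part of the proposal), screening a corpus
for such terms might start with a keyword filter like the one below; the term
list and sample corpus are hypothetical, and a real attempt would need far
broader coverage of synonyms, paraphrases, and whole discussions of mind:

```python
import re

# Hypothetical exclusion list; real coverage would need to be far broader.
FORBIDDEN = ["consciousness", "subjective experience", "sentience",
             "qualia", "self-aware"]

pattern = re.compile("|".join(re.escape(t) for t in FORBIDDEN),
                     re.IGNORECASE)

def is_clean(document: str) -> bool:
    """Return True if the document mentions none of the forbidden terms."""
    return pattern.search(document) is None

# Hypothetical mini-corpus: the second document would be excluded.
corpus = [
    "The weather data showed an interesting pattern.",
    "Philosophers debate the nature of subjective experience.",
]
filtered = [doc for doc in corpus if is_clean(doc)]
```

Even this sketch shows why I doubt the dataset is achievable: the concepts
leak into language in ways no keyword list can catch.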
Now, if in conversation with the LLM the human operator made some
interesting observation about the data and the LLM responded with something
like "Yes, I was thinking the same thing!" -- THAT would be evidence that it
is conscious.
I noticed that Eliezer Yudkowsky mentioned this hypothetical test in his
interview with Lex Fridman the other day, so probably either he borrowed it
from Sutskever or Sutskever borrowed it from Yudkowsky, but the idea is
making the rounds.