[ExI] Google engineer claims AI is sentient

BillK pharos at gmail.com
Sun Jun 12 20:24:27 UTC 2022

On Sun, 12 Jun 2022 at 18:20, Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html
> I suspect the claim is a bit beyond what the evidence supports.
> _______________________________________________

That's correct. The new LaMDA chatbot is not sentient.
But it is superb at convincing humans that it is sentient.
Long article here, which includes sample LaMDA conversations.


"We now have machines that can mindlessly generate words, but we
haven’t learned how to stop imagining a mind behind them," said
University of Washington linguistics professor, Emily M. Bender, who
added that even the terminology used to describe the technology, such
as "learning" or even "neural nets" is misleading and creates a false
analogy to the human brain.

As Google's Gabriel notes, "Of course, some in the broader AI
community are considering the long-term possibility of sentient or
general AI, but it doesn’t make sense to do so by anthropomorphizing
today’s conversational models, which are not sentient. These systems
imitate the types of exchanges found in millions of sentences, and can
riff on any fantastical topic."

In short, Google acknowledges that these models can "feel" real to users, whether or not any AI is actually sentient.

Reading the sample LaMDA conversations shows just how much these chatbots sound like a human intelligence talking.


More information about the extropy-chat mailing list