<div dir="ltr">Political activists in this country now make a regular practice of grudgingly admitting that their opponents can form grammatical sentences, while enthusiastically denying that there is an agentic mind behind the words.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jun 12, 2022 at 2:26 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Sun, 12 Jun 2022 at 18:20, Adrian Tymes via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> <a href="https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html" rel="noreferrer" target="_blank">https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html</a><br>
><br>
> I suspect the claim is a bit beyond what the evidence supports.<br>
<br>
<br>
That's correct. The new LaMDA chatbot is not sentient.<br>
But it is superb at convincing humans that it is sentient.<br>
Long article here, including sample LaMDA conversations as well.<br>
<br>
<<a href="https://www.zerohedge.com/technology/google-engineer-placed-leave-after-insisting-companys-ai-sentient" rel="noreferrer" target="_blank">https://www.zerohedge.com/technology/google-engineer-placed-leave-after-insisting-companys-ai-sentient</a>><br>
<br>
Quotes:<br>
"We now have machines that can mindlessly generate words, but we<br>
haven’t learned how to stop imagining a mind behind them," said<br>
University of Washington linguistics professor, Emily M. Bender, who<br>
added that even the terminology used to describe the technology, such<br>
as "learning" or even "neural nets" is misleading and creates a false<br>
analogy to the human brain.<br>
<br>
As Google's Gabriel notes, "Of course, some in the broader AI<br>
community are considering the long-term possibility of sentient or<br>
general AI, but it doesn’t make sense to do so by anthropomorphizing<br>
today’s conversational models, which are not sentient. These systems<br>
imitate the types of exchanges found in millions of sentences, and can<br>
riff on any fantastical topic."<br>
<br>
In short, Google acknowledges that these models can "feel" real,<br>
whether or not an AI is sentient.<br>
------------------<br>
<br>
Reading the sample LaMDA conversations shows just how much these chatbots can<br>
sound like a human intelligence talking.<br>
<br>
<br>
BillK<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>