[ExI] eliza effect?
BillK
pharos at gmail.com
Thu Jun 23 09:48:36 UTC 2022
On Thu, 23 Jun 2022 at 05:33, spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/
LaMDA fools people because it was explicitly designed to do that.
<https://www.androidpolice.com/what-is-google-lamda/>
Quotes:
How Google's LaMDA AI works, and why it seems so much smarter than it is
By Ryne Hager Published 16 June 2022
A short history of Google's language-processing AI efforts
Like many ML-based systems, rather than generating a single response,
LaMDA creates multiple candidates and picks the one that its internal
ranking systems judge to be the best. So when it’s asked a question,
it doesn’t “think” along a single path to one answer; it creates
several of them, with another model choosing which scores highest on
that SSI (sensibleness, specificity, interestingness) score we
mentioned, actively trying to pick out the most interesting,
insightful, and curious answers.
At a fundamental level, LaMDA isn’t just a software-based machine;
it’s a machine that was explicitly made and trained to provide the
most human-like answers possible through a selection process that’s
meant to literally please humans into believing its responses came
from one. That was the expressed goal here, so should we be surprised
if it succeeds in doing that? We built it with this purpose in mind.
Ultimately, LaMDA’s responses can beat average human responses on its
interestingness metric in Google’s testing and come very close in
sensibleness, specificity, and safety metrics (though it still falls
short in other areas).
And LaMDA isn't even the end here. As touched on before, Google's
shiny new PaLM system has capabilities LaMDA can't approach, like the
ability to prove its work, write code, solve text-based math problems,
and even explain jokes, with a parameter "brain" that's almost four
times as big. On top of that, PaLM acquired the ability to translate
and answer questions without being trained specifically for the task —
the model is so big and sophisticated, the presence of related
information in the training dataset was enough.
==============
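That generate-and-rerank loop is easy to picture. A rough sketch in
Python (generate_candidates and ssi_score are made-up stand-ins for
Google's internal models and rankers, not any real API):

import random
from typing import List

def generate_candidates(prompt: str, n: int = 4) -> List[str]:
    # Stand-in for the language model sampling n different replies.
    return ["candidate reply %d to: %s" % (i, prompt) for i in range(n)]

def ssi_score(reply: str) -> float:
    # Stand-in for the separate rankers scoring sensibleness,
    # specificity and interestingness; just random numbers here.
    return sum(random.random() for _ in range(3)) / 3

def respond(prompt: str) -> str:
    # Generate several candidates and return the one the rankers
    # score highest, rather than following a single path to one answer.
    candidates = generate_candidates(prompt)
    return max(candidates, key=ssi_score)

print(respond("What makes a conversation interesting?"))

The 'people-pleasing' lives in that selection step: whatever scores as
most pleasing to a human reader is what gets sent back.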
Google has big plans for these advanced chatbots.
I can see a time coming when almost every communication from Google
will come from a chatbot: search results, PR reports, news items, and
so on.
One big problem is the 'people-pleasing' objective. Do we want to be
'pleased' with the response? Or do we want an objective response, even
when we might not like the 'truth' we are being told? These advanced
chatbots are becoming very powerful persuasion machines. Misuse seems
inevitable.
BillK