[ExI] eliza effect?

Brent Allsop brent.allsop at gmail.com
Thu Jun 23 15:35:31 UTC 2022


IF LaMDA or PaLM are at all smart, it will be easy to point out to them
that, because of the abstract nature of their knowledge, they can't know
what words like redness mean the way we can.
Surely you'll be able to prove to them that their intelligence isn't
composed of elemental intrinsic qualities, or qualia, the way ours is,
just as I did with GPT-3
<https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>.




On Thu, Jun 23, 2022 at 3:50 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, 23 Jun 2022 at 05:33, spike jones via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/
> > _______________________________________________
>
>
> LaMDA fools people because it was explicitly designed to do that.
>
> <https://www.androidpolice.com/what-is-google-lamda/>
>
> Quotes:
> How Google's LaMDA AI works, and why it seems so much smarter than it is
> By Ryne Hager    Published 16 June 2022
>
> A short history of Google's language-processing AI efforts
>
> Like many ML-based systems, rather than generate a single response,
> LaMDA creates multiple candidates and picks whichever one its internal
> ranking systems judge the best. So when it’s asked a question, it
> doesn’t “think” along a single path to one answer; it creates several
> of them, and another model chooses whichever scores highest on that
> SSI metric we mentioned, actively trying to pick out the most
> interesting, insightful, and curious answers.
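
In code terms, the candidate-and-rank loop described above looks roughly
like the sketch below. This is only an illustration of the idea, not
Google's code: sample_reply() and ssi_score() are hypothetical stand-ins
for LaMDA's internal sampler and its SSI (sensibleness, specificity,
interestingness) ranking model.

import random

# Hypothetical stand-ins for LaMDA's internals: a sampler that proposes
# candidate replies, and a ranking model that scores them on SSI.
def sample_reply(prompt: str) -> str:
    return random.choice(["reply A", "reply B", "reply C"])  # placeholder

def ssi_score(prompt: str, reply: str) -> float:
    return random.random()  # placeholder for the ranking model's score

def respond(prompt: str, n_candidates: int = 16) -> str:
    # Generate several candidate replies rather than committing to one...
    candidates = [sample_reply(prompt) for _ in range(n_candidates)]
    # ...then let a separate ranking model pick the highest-scoring one.
    return max(candidates, key=lambda c: ssi_score(prompt, c))

The point is just that every answer a user sees has already been filtered
by a model tuned to maximize those interestingness and sensibleness scores.
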
>
> At a fundamental level, LaMDA isn’t just a software-based machine;
> it’s a machine that was explicitly made and trained to provide the
> most human-like answers possible through a selection process that’s
> meant to literally please humans into believing its responses came
> from one. That was the expressed goal here, so should we be surprised
> if it succeeds in doing that? We built it with this purpose in mind.
>
> Ultimately, LaMDA’s responses can beat average human responses on its
> interestingness metric in Google’s testing and come very close in
> sensibleness, specificity, and safety metrics (though it still falls
> short in other areas).
>
> And LaMDA isn't even the end here. As touched on before, Google's
> shiny new PaLM system has capabilities LaMDA can't approach, like the
> ability to prove its work, write code, solve text-based math problems,
> and even explain jokes, with a parameter "brain" that's almost four
> times as big. On top of that, PaLM acquired the ability to translate
> and answer questions without being trained specifically for the task —
> the model is so big and sophisticated, the presence of related
> information in the training dataset was enough.
> ==============
>
>
> Google has big plans for these advanced chatbots.
> I can see a time coming when almost every communication from Google
> will come from a chatbot: search results, PR reports, news items,
> and so on.
> One big problem is the 'people-pleasing' objective.  Do we want to be
> 'pleased' with the response? Or do we want an objective response, even
> when we might not like the 'truth' that is being told to us?  These
> advanced chatbots are becoming very powerful persuasion machines.
> Misuse seems inevitable.
>
>
> BillK
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>