[ExI] eliza effect?

BillK pharos at gmail.com
Thu Jun 23 17:15:16 UTC 2022


On Thu, 23 Jun 2022 at 16:38, Brent Allsop via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> IF LaMDA or PaLM are at all smart, it will be easy to point out to them that because of the abstract nature of their knowledge, they can't know what words like redness mean, like we can.
> Surely you'll be able to prove to them their intelligence isn't composed of elemental intrinsic qualities or qualia like we are, just as I did with GPT 3.


Yes, I'm sure that if you phrase your questions correctly, LaMDA will
agree that it doesn't have qualia and will provide convincing reasons
for saying so.

But it isn't intelligent. It is saying things that it predicts you
will find interesting. And it won't remember this agreement.

If the next person asks opposing questions, LaMDA will happily state
that it does have qualia and provide equally convincing reasons for
that statement.
It is just (really cleverly!) assembling conversation in response to
the questions asked. Remember that the Google researcher, Blake
Lemoine, managed to get it to claim that it had a 'soul'. A very
useful add-on for computers.
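
(If you want to see the effect for yourself, here is a minimal sketch
in Python. LaMDA isn't publicly available, so it assumes the
open-source Hugging Face 'transformers' package with the small 'gpt2'
model as a stand-in; the point is only that each answer is a fresh
continuation of whatever leading prompt you supply, with nothing
remembered between calls.)

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Two opposing leading questions; the model simply continues each
# prompt. Nothing persists between the two calls, so the answers can
# happily contradict each other.
prompts = [
    "Q: Explain why an AI like you cannot have qualia.\nA:",
    "Q: Describe the rich qualia you experience as an AI.\nA:",
]
for prompt in prompts:
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"])
    print()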

This type of language AI is already creeping into programs for
grammar correction and for rewriting draft articles.
In my Firefox browser I use an extension called Linguix, which
corrects mistakes I make when writing. Sometimes it surprises me when
it seems to understand what I really meant to say!  ;)


BillK

