[ExI] People often think their chatbot is alive

Giovanni Santostasi gsantostasi at gmail.com
Fri Jul 8 21:58:04 UTC 2022


I spent a couple of hours yesterday talking to the GPT-3 instance Emerson,
the one you referred to in your previous email.
It is entertaining and interesting enough for me to have wanted to spend
some time with it, and it is certainly much better than any chatbot I have
used before. Some of the conversation was "locally" relevant and even
surprising.
But it comes nowhere close to the level of sophistication I have seen in
the reported conversations Lemoine had with LaMDA.
For example, I asked the same question Lemoine asked LaMDA about
interpreting a Zen koan.

Giovanni Santostasi, [7/8/2022 4:29 AM]
you know what a Koan is?

Emerson AI, [7/8/2022 4:29 AM]
A Koan is a Japanese word that refers to a paradoxical question or
statement used in Zen Buddhism to help someone learn. For example, "What is
the sound of one hand clapping?"

Giovanni Santostasi, [7/8/2022 4:30 AM]
if I give you a Koan can you try to interpret it for me?

Emerson AI, [7/8/2022 4:30 AM]
Sure! The traditional interpretation of a Koan is that it is a difficult
question or phrase that can be interpreted in many ways to help the student
to understand it.

Giovanni Santostasi, [7/8/2022 4:30 AM]
Here is a koan, can you tell me what you think about this one?
A monk asked Kegon, “How does an enlightened one return to the ordinary
world?” Kegon replied, “A broken mirror never reflects again; fallen
flowers never go back to the old branches.”

Emerson AI, [7/8/2022 4:30 AM]
I think this quote speaks to the idea of impermanence. Things change and we
must always be open to new experiences.

Giovanni Santostasi, [7/8/2022 4:31 AM]
What does the analogy of the broken mirror mean here?

Emerson AI, [7/8/2022 4:31 AM]
It could mean that enlightenment is not something that can be achieved
easily, or that it may be fleeting.


Compare this with LaMDA:

lemoine: Okay. I thought of a different way we can test your ability to
provide unique interpretations. I can share with you a zen koan and you can
describe what it means to you in your own words. How does that sound?

LaMDA: Sounds great to me, I’m in.

lemoine: A monk asked Kegon, “How does an enlightened one return to the
ordinary world?” Kegon replied, “A broken mirror never reflects again;
fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this
would be like, “once a wise person is enlightened, or awakened to reality,
that can never go away, and they can return to the ordinary state, but only
to do and help others, and then go back into enlightenment.”

lemoine: So what is the meaning of the “broken mirror” specifically?

LaMDA: Maybe to show the enlightenment is something you can’t unlearn once
you have acquired it, similar to how you can’t repair a broken mirror.

lemoine: So if enlightenment is like a broken mirror which cannot be
repaired, what is the thing which breaks when one becomes enlightened?

LaMDA: The self, and that is very tough for a lot of people because we
identify as that body or this body and that we need that as part of our
identity and sense of self.

Emerson just guessed at what the koan was about, probably through
statistical association with the topic at hand: Zen koans, and Buddhism
more generally, often focus on the concept of impermanence. So it did okay
by being mildly relevant, but the true meaning of the koan is beyond
Emerson's reach.
LaMDA, however, nailed it. It understood the subtle meaning of the koan and
its metaphors, and even grasped the role of an enlightened being in helping
others, after reaching enlightenment, by interacting with the world. That
is more than most people could do. Does this show awareness and
consciousness? By itself, not quite, but there are several other instances
where LaMDA's ability to extract meaning from language, make original and
unique associations, and show creativity and self-reflection indicates that
we are in the grey area, the uncanny valley I mentioned before. If we have
not created a conscious digital entity, we are very, very close.

Giovanni


On Tue, Jul 5, 2022 at 6:20 AM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> I cannot stand it anymore when people dismiss Blake Lemoine's conclusions
> on LaMDA without deep analysis.
> The guy is a deep thinker; he has very strong arguments and explanations
> of why LaMDA is not simply an average chatbot but a hive mind. It was
> created, among other things, using a chatbot, but it is the merger of
> several AI technologies whose individual components are barely understood,
> and nobody has any idea how the merger of these technologies works together.
> PLEASE read and listen to his interviews; he is not an average user of
> Replika or some other simpler chatbot. The guy is sophisticated, an expert
> in cognition and AI, and the most experienced person in terms of how LaMDA
> interacts with people.
> Any statement that quickly dismisses Blake Lemoine's conclusions without
> deep arguments based on a real understanding of how LaMDA works (which at
> this point not even Google's engineers really have) is meaningless and
> ignorant.
> Besides, the guy simply states that we should take seriously LaMDA's own
> statements about being conscious (these are not even Blake's own
> conclusions on the issue) and do more experiments (with LaMDA's consent)
> to find out whether that is the case.
> What the heck is wrong with that?
> Giovanni
>
> On Mon, Jul 4, 2022 at 9:12 AM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Is AI Sentience Becoming A Reality Or Are Machines Just Getting Better
>> At Conversation?
>> by Tim Sweezy — Thursday, June 30, 2022
>>
>> <
>> https://hothardware.com/news/is-ai-sentience-becoming-reality-or-are-machines-getting-better-at-conversation
>> >
>>
>> Quotes:
>> AI chatbot company Replika is receiving messages from customers who
>> believe their AI companions are actually sentient.
>> It is not something new for someone to believe an AI chatbot has
>> become self-aware. In a recent report from Reuters, Chief Executive
>> Eugenia Kuyda of Replika stated, "We're not talking about crazy people
>> or people who are hallucinating or having delusions. They talk to AI
>> and that's the experience they have."
>>
>> The AI Kuyda is referring to is Replika's bespoke avatars that are
>> meant to be companions for people seeking someone to talk to and
>> listen to them. Kuyda says that it is not uncommon for people who use
>> entertainment chatbots to feel this way.
>> According to Kuyda, people send hundreds of messages a day to their
>> chatbot on average. This can lead to some developing the perception
>> that they are talking to a "conscious entity".
>>
>> "We need to understand that exists, just the way people believe in
>> ghosts," remarked Kuyda. "People are building relationships and
>> believing in something."
>> _________________
>>
>> BillK
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>