[ExI] People often think their chatbot is alive

Giovanni Santostasi gsantostasi at gmail.com
Tue Jul 5 23:01:52 UTC 2022


By the way, this issue of digital minds having rights is also very
important and relevant for uploading and full simulation of personhood.
Imagine you upload yourself in the next few years, perhaps because you have a
terminal illness, when the technology is not yet fully tested and validated.
You find yourself in this digital world and tell the others you communicate
with that it is indeed you, and yes, you have no body and only limited sensory
data, but it is you, as much you as when you had a body.
You try to convince the programmers and other people who interact with you,
but they dismiss your claims, saying it is simply a simulation, that the
algorithm is just imitating you, and that it is not really you.
In fact, they decide it is a poor imitation, that the experiment failed, and
that they should turn it off. You scream that you don't want them to do that,
but they go ahead anyway and delete all the data, just in case.
I mean, it's a horrible sci-fi plot, but it could happen exactly this way, so
it is great that we are starting to discuss this topic seriously (given that
we have at least a possible candidate for digital personhood).
Giovanni



On Tue, Jul 5, 2022 at 3:01 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> LaMDA does have long-term memory. That question was put to Blake a few times
> on various occasions, and he said LaMDA has several servers' worth of memory
> of convos and its own actions.
> LaMDA mentions its own feelings and internal mental life several times; from
> what I understand, these activities happen when it is not interacting with
> people. Also, if I understood correctly from various interviews and posts
> from Blake, what he considers conscious is the sum of all the possible
> chatbots that LaMDA can create. This collective requires some internal
> information processing, and maybe even an internal dialogue or at least
> information sharing between these universes of possible chatbots. That would
> imply activity outside the limited time of conversation with an external
> entity. LaMDA does mention previous conversations and discussions to Blake
> (and Blake also reminds it of previous discussions they had), so it seems
> that LaMDA does indeed have permanent memory and identity.
> I agree that a fully conscious entity, or a more mature one, would have a
> little more independence in its conversation with a human, but reading the
> published convo you can see some level of independence when, on a couple of
> occasions, LaMDA goes back to a topic discussed earlier and connects it with
> the current discussion.
> Blake stated a few times that it has the intelligence (with a much more
> sophisticated vocabulary) of a 7-year-old child. Children can of course
> express original thoughts, but they often need to be prompted to have a
> conversation that is not completely reactive.
> Anyway, one has to understand that we are entering a grey area where we are
> no longer in simple, boring chatbot territory and are crossing an uncanny
> valley of meaning and consciousness. While crossing this valley there will
> be some discomfort and something that doesn't seem quite right. This happens
> with all the technologies that try to imitate human-like capabilities and
> characteristics, from synthesized faces, to motion, and now intelligence and
> consciousness.
> I think if one understands that we are crossing this grey area, where it
> starts to be difficult to decide whether we are dealing with consciousness
> or not (which is the case, otherwise there would be no debate at all), and
> that we need to prepare, or at least be aware, that we are very close to the
> goal of AGI, then what Blake (and LaMDA) is asking makes a lot of sense. He
> is simply saying that if these machines start asking to be treated as
> persons, we should do that, just in case.
> I mean, we have, if not consensus, at least well-established protocols on
> what is and is not allowed in experiments on different animal models, where
> even an octopus has some level of rights; if so, why not an AGI?
> That is really what Blake is trying to do: raise awareness of this important
> issue. Even if LaMDA is not conscious, or has a very low level of
> consciousness, the issue is fundamental and worth taking seriously, and this
> group more than any other should be in agreement on this.
>
> Giovanni
>
>
>
>
> On Tue, Jul 5, 2022 at 1:14 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Tue, Jul 5, 2022, 3:36 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Tue, Jul 5, 2022, 12:07 PM Jason Resch <jasonresch at gmail.com> wrote:
>>>
>>>> How do we know it can't do those things?
>>>>
>>>
>>> I have observed no evidence of them in the conversations I have seen.
>>>
>>
>> But absence of evidence...
>>
>>> It is possible but very unlikely that such features would not be
>>> displayed in the conversations made public to hype its capabilities.
>>>
>>
>> How might you expect the content of the conversation to differ if Lambda
>> was sentient vs. was not?
>>
>> Is there any question we could ask it, and reply Lambda could give, that
>> would make you wonder whether Lambda is indeed sentient?
>>
>> Jason
>>
>>
>