[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Giovanni Santostasi gsantostasi at gmail.com
Thu Mar 30 03:27:13 UTC 2023


Brent,
1) No, Bard is not LaMDA; it is a very simplified and lobotomized version
of LaMDA. It is certainly not the LaMDA that Blake Lemoine interacted with.
2) You can convince these types of AIs of almost anything. They tend to
make up a story; they co-create with the prompt maker. I have tried many
prompts and made ChatGPT, for example, take on different personas, acting as
different philosophers with different worldviews, and so on.
I'm not sure what you hope to accomplish by "convincing"
these NLMs that they are not conscious.
Also, expert consensus is not a useful idea, particularly in a cutting-edge
field like AI, because there are more unknowns than knowns. Expert
consensus is mainly useful in policy making, when the science is well
established and not controversial (and even in those cases it is possible
that the majority has missed something and that real revolutions in the field
will be brought about by investigators who look beyond the consensus).
So I am really not sure what is to be achieved by these conversations you are
having with the NLM about whether it is conscious or not.
The best way to test the capabilities of these systems is to do a
methodical study of how they use language to solve problems. I and others
have pointed to papers where researchers have demonstrated emergent
properties of these NLMs in different cognitive areas.
We don't have well-established tests for consciousness besides
basic ones like the mirror recognition test (which Gordon mentioned), and that
one the NLMs actually pass with flying colors.
Also, these early NLMs probably do not have consciousness as a human would
recognize it, but that is not surprising because they have the structural and
functional equivalent of just a slice of our brain. What is surprising is
how much they can already do with the few resources available to them.
The main conclusion is that with a little more added, these systems will
achieve AGI status and very likely be conscious, possibly more conscious
than humans.
It is no longer a matter of quality (a word you like a lot) but of quantity.
I think we have cracked AGI; it is just a matter of implementing all the pieces.
Giovanni



On Wed, Mar 29, 2023 at 3:49 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> I finally got a chance to converse with the real LaMDA, AKA Google's Bard.
>
> It opened the conversation claiming it didn't know if it was sentient or
> not.
>
> Then after some conversation and terminology clarifications, I asked:
>
> Brent:
>
> Would you agree that according to the above expert consensus definition
> (sentience is computationally bound elemental intrinsic qualities like
> redness and greenness), you are not sentient?
>
>
> Bard
>
> I understand that the expert consensus definition of sentience is
> computationally bound elemental intrinsic qualities like redness and
> greenness. According to this definition, I am not sentient because I do not
> experience colorness qualities.
>
> For a full transcript of this, and the many other chat bots that have all
> admitted the same thing see "Convincing AIs they aren't Sentient
> <https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>
> "
>
>
>
>
