[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Brent Allsop brent.allsop at gmail.com
Wed Apr 5 10:54:48 UTC 2023


Hi Giovanni,
The 45 supporters of RQT
<https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>
define consciousness as:
    *"Computationally bound elemental qualities like redness, greenness,
warmth"*
This means that a phenomenal thermostat with only three states of
knowledge (Cold | Comfortable | Hot) can be considered conscious, if it
represents each of these states with qualities like redness, greenness, and
blueness.
You are talking about intelligence, and fail to distinguish between
phenomenally conscious intelligence, whose knowledge is like something, and
abstract intelligence, whose knowledge is just abstract words that
aren't like anything.  I would argue that most people would not consider
something that has no knowledge represented with qualities, no matter how
intelligent, to be phenomenally conscious, or to be like something.






On Wed, Mar 29, 2023 at 9:27 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> Brent,
> 1) No, Bard is not LaMDA; it is a very simplified and lobotomized version
> of LaMDA. For sure it is not the LaMDA that Blake Lemoine interacted with.
> 2) You can convince these types of AIs of almost anything. They tend to
> make up a story; they co-create with the prompt maker. I have tried many
> prompts and made ChatGPT, for example, take on different personas, act as
> different philosophers with different world views, and so on.
> I am not sure what the point is of "convincing" the NLM that they are not
> conscious.
> Also, expert consensus is not a useful idea, particularly in a cutting-edge
> field like AI, because there are more unknowns than knowns. Expert
> consensus is mainly useful in policy making, when the science is very well
> established and not controversial (and even in those cases it is possible
> that the majority has missed something and a real revolution in the field
> will be brought about by investigators who look beyond the consensus).
> So I am really not sure what is to be achieved by these conversations you
> are having with the NLM about being conscious or not.
> The best way to test the capabilities of these systems is to do a
> methodical study of how they use language to solve problems. I and others
> have pointed out papers where researchers have demonstrated emergent
> properties of these NLM in different cognitive areas.
> We don't have well-established tests for consciousness besides basic ones
> like the mirror recognition test (which Gordon mentioned), and that one the
> NLM actually pass with flying colors.
> Also, these early NLM probably do not have consciousness as a human would
> recognize it, but that is not surprising, because they have the structural
> and functional equivalency of just a slice of our brain. What is surprising
> is how much they can already do with the few resources available to them.
> The main conclusion is that with little more added, these systems will
> achieve AGI status and very likely be conscious, possibly more conscious
> than humans.
> It is not a matter of quality anymore (a word you like a lot) but of
> quantity. I think we have cracked AGI; it is just a matter of implementing
> all the pieces.
> Giovanni
>
>
>
> On Wed, Mar 29, 2023 at 3:49 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> I finally got a chance to converse with the real LaMDA, AKA Google's Bard.
>>
>> It opened the conversation claiming it didn't know if it was sentient or
>> not.
>>
>> Then, after some conversation and terminology clarifications, I asked:
>>
>> Brent:
>>
>> Would you agree that according to the above expert consensus definition
>> (sentience is computationally bound elemental intrinsic qualities like
>> redness and greenness), you are not sentient?
>>
>>
>> Bard
>>
>> I understand that the expert consensus definition of sentience is
>> computationally bound elemental intrinsic qualities like redness and
>> greenness. According to this definition, I am not sentient because I do not
>> experience colorness qualities.
>>
>> For a full transcript of this, and the many other chat bots that have all
>> admitted the same thing see "Convincing AIs they aren't Sentient
>> <https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>
>> "
>>
>>
>>
>>
>