[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 7 06:24:20 UTC 2023


On Thu, Apr 6, 2023 at 5:51 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Are you familiar with Leibniz's Giant Mill thought experiment?
> Consciousness isn't something we can see like a glowing orb. We can only
> ever infer it from clues of behavior.
>
> Given that, is there any behavior that a machine could demonstrate that
> would convince you it is conscious? If so, what is that behavior?
>

Good question. In the old days, people made a distinction between strong AI
and weak AI, where strong AI was taken to mean an AI conscious like a human,
and weak AI one that is unconscious but has the appearance of consciousness.
I have always maintained that weak AI, so defined, is possible.

Somewhere along the line, the language changed. Strong AI no longer
necessarily means conscious, and people with possibly dubious motives
popularized the slippery term "sentient," which according to Webster can
mean either conscious or unconscious. We also have the term AGI.

In any case, I think it might be impossible to know the difference from
behavior alone. This means that those of us who believe the same have no
recourse but theoretical arguments, which is the sort of thing we do here on
ExI.

>> As we discussed, chess apps can develop what seem to us remarkable and
>> novel strategies. We might call them emergent properties, but they follow
>> logically from the simple rules of chess. Does that make them conscious,
>> too?
>>
>
> I don't think the strategies imply consciousness. I think consciousness is
> implied by something much simpler: a demonstrated awareness of certain
> information.
>
> For example, I think playing chess demonstrates that within the
> chess-playing system there exists an awareness (consciousness) of the
> chess board and the layout of the pieces.
>

I am glad I asked you that question, and I am surprised, as a week or two
ago you agreed with me that consciousness entails the capacity to hold
something consciously in mind. I doubt you really believe that about chess
software. Or do you?


>> As I wrote, I would actually call it a miracle, as it would mean that the
>> LLM invented the word "I" out of nothing, never having seen it or anything
>> like it in text. I am not sure what Sutskever's answer would be to my
>> question about that problem, and it could be that I don't fully understand
>> his thought experiment. I am paraphrasing Altman, who was paraphrasing
>> Sutskever.
>>
>
>
> I don't think it would use the word "I", but I think it could come up with
> a third-person reflexive description of itself, e.g. as "that process which
> generates the responses that appear between the prompts."
>

That's an interesting thought.

I think you should watch this clip:
>
>
> https://twitter.com/bio_bootloader/status/1640512444958396416?t=MlTHZ1r7aYYpK0OhS16bzg&s=19
>
> If you disagree with him, could you explain why and how he is wrong?
>

That is Sutskever, as I suppose you know.

"what does it mean to predict the next token well enough? ... it means that
you understand the underlying reality that led to the creation of that
token"

Do I agree with that? It depends on what he means by understanding, and I
gather that he is not thinking in terms of conscious understanding, which to
me is the important question. A great deal of extremely complex, and what I
would call intelligent, behavior happens unconsciously in the world. The
human immune system is amazing, for example, but I doubt it knows consciously
what it is doing.
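
As an aside, here is a toy sketch in Python of what "predicting the next
token" means mechanically (my own illustration, not Sutskever's; a simple
bigram count table stands in for the neural network a real LLM uses):

    # Hypothetical toy example: a bigram "language model" that counts which
    # token follows which, then predicts the most frequent successor.
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat and the cat slept".split()

    # Count how often each token follows each other token.
    successors = defaultdict(Counter)
    for current, nxt in zip(training_text, training_text[1:]):
        successors[current][nxt] += 1

    def predict_next(token):
        """Return the most frequently observed successor of `token`."""
        counts = successors.get(token)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> "cat" (seen twice after "the")

A real LLM optimizes essentially the same objective, only with a deep neural
network trained on vastly more text in place of the count table; whether that
amounts to understanding in the conscious sense is the question at issue.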

-gts