[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Jason Resch jasonresch at gmail.com
Fri Apr 7 12:30:34 UTC 2023


On Fri, Apr 7, 2023 at 1:24 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 6, 2023 at 5:51 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Are you familiar with Leibniz's Giant Mill thought experiment?
>> Consciousness isn't something we can see like a glowing orb. We can only
>> ever infer it from clues of behavior.
>>
>> Given that, is there any behavior that a machine could demonstrate that
>> would convince you it is conscious? If so, what is that behavior?
>>
>
> Good question. In the old days, people made a distinction between strong
> AI and weak AI, where a strong AI was taken to mean conscious like a human
> and a weak AI was taken to mean unconscious, but with the appearance of
> consciousness.  I have always maintained that weak AI so defined is
> possible.
>
> Somewhere along the line, the language changed. Strong AI no longer means
> conscious, necessarily, and people with possibly dubious motives
> popularized the slippery term "sentient," which according to Webster could
> mean conscious or unconscious. We also have the term AGI.
>
> In any case, I think it might be impossible to know the difference from
> behavior alone. This means that for people who believe the same, we have no
> recourse but theoretical arguments, which is the sort of thing we do here on
> ExI.
>

In summary, then: you lean towards there being no possible behavior a machine
could demonstrate that would show it is conscious, and that we need a theory
of consciousness to determine which things have it and which don't? That's a
fair position.


>
> As we discussed, chess apps can develop what seem to us remarkable and
>>> novel strategies. We might call them emergent properties, but they follow
>>> logically from the simple rules of chess. Does that make them conscious,
>>> too?
>>>
>>
>> I don't think the strategies imply consciousness. I think consciousness
>> is implied by something much simpler: it's demonstrated awareness of
>> certain information.
>>
>> For example, I think playing chess demonstrates that within that
>> chess-playing system there exists an awareness (consciousness) of the
>> chess board and the layout of the pieces.
>>
>
> Glad I asked you that question, and I am surprised, as a week or two ago
> you agreed with me that consciousness entailed the capacity to hold
> something consciously in mind.
>

Holding something consciously in mind implies a conscious mind, yes, by
definition. But due to its circularity, I don't think this definition is
useful.


> I doubt you really believe that about chess software. Or do you?
>

I do. If I build a robot that can catch a thrown baseball, then somewhere
within that robot, in its processor and its algorithms, there is information
constituting an awareness of the ball, its trajectory, its position relative
to the robot, and so on. I see no way around this. If we begin talking about
robots that are "unconsciously aware", we begin talking inconsistently. The
only way to avoid such inconsistency is to accept that when something
demonstrates reliable behavior which cannot be explained without something
in that system being aware of some piece of information, then there exists
within that system an awareness of that information. And as I see it,
consciousness is nothing beyond awareness.

So yes, chess-playing software must have some kind of consciousness related
to the position of the chess board, just as a nematode has some kind of
conscious awareness of the diacetyl it smells and moves towards. Though I
imagine that their conscious experience is very different from our own, as
whatever qualia they perceive would relate to the structure of their
awareness.
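
To make the robot example concrete, here is a rough sketch in Python (my
own toy illustration, with an invented proportional controller and a
drag-free physics model, not any actual robot's code) of the kind of
internal state such a system must maintain:

from dataclasses import dataclass

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class BallState:
    x: float   # horizontal position of the ball (m)
    y: float   # height of the ball (m)
    vx: float  # horizontal velocity (m/s)
    vy: float  # vertical velocity (m/s)

def predicted_landing_x(ball):
    """Predict where the ball will be when it returns to ground level."""
    # Positive root of y + vy*t - 0.5*G*t**2 = 0
    t = (ball.vy + (ball.vy**2 + 2 * G * ball.y) ** 0.5) / G
    return ball.x + ball.vx * t

def control(robot_x, ball):
    """Velocity command that moves the robot toward the landing point."""
    return 2.0 * (predicted_landing_x(ball) - robot_x)

# The robot's "awareness" of the ball consists in this internal state and
# the computations performed on it; nothing more is needed to explain the
# reliable catching behavior.
print(control(robot_x=0.0, ball=BallState(x=1.0, y=2.0, vx=3.0, vy=4.0)))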


>
> As I wrote, I would actually call it a miracle as it would mean that the
>>> LLM invented the word "I" out of nothing, never having seen it or anything
>>> like it in text. I am not sure what Sutskever's answer would be to my question
>>> about that problem, and it could be that I don't fully understand his
>>> thought experiment. I am paraphrasing Altman who was paraphrasing Sutskever.
>>>
>>
>>
>> I don't think it would use the word "I", but I think it could come up
>> with a third-person reflexive description of itself, e.g. as "that process
>> which generates the responses that appear between the prompts."
>>
>
> That's an interesting thought.
>

Thanks.


>
> I think you should watch this clip:
>>
>>
>> https://twitter.com/bio_bootloader/status/1640512444958396416?t=MlTHZ1r7aYYpK0OhS16bzg&s=19
>>
>> If you disagree with him, could you explain why and how he is wrong?
>>
>
> That is Sutskever, as I suppose you know.
>

I was not, actually. Thanks for pointing that out!


>
> "what does it mean to predict the next token well enough? ... it means
> that you understand the underlying reality that led to the creation of that
> token"
>
> Do I agree with that? It depends on what he means by understanding, and I
> gather that he is not thinking in terms of conscious understanding, which
> is to me the important question. Lots of extremely complex and what I would
> call intelligent behavior happens unconsciously in the world. The human
> immune system is amazing, for example, but I doubt it knows consciously
> what it is doing.
>

I hope, though, that it illustrates that much more is involved in "predicting
symbols" than one might initially suppose. In order to accurately predict
symbols generated by a complex world, one must develop some kind of internal
model of that world. Do you agree?
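
As a toy illustration of that point (my own example in Python, not
Sutskever's): consider predicting the completion of simple arithmetic
strings. A predictor that only memorizes surface statistics fails on
unseen cases, while one whose internals function as a model of addition
generalizes:

import re

# Toy "training data": a handful of arithmetic strings.
training_text = ["2 + 2 = 4", "3 + 5 = 8", "10 + 1 = 11"]

def predict_by_lookup(prompt):
    """Surface-statistics predictor: recalls only strings seen before."""
    for line in training_text:
        if line.startswith(prompt):
            return line[len(prompt):].strip()
    return None  # fails on anything novel

def predict_by_model(prompt):
    """Predictor whose internals model the 'world' behind the text."""
    m = re.match(r"(\d+) \+ (\d+) =", prompt)
    if m:
        return str(int(m.group(1)) + int(m.group(2)))
    return None

print(predict_by_lookup("6 + 9 ="))  # None: this string was never seen
print(predict_by_model("6 + 9 ="))   # '15': the internal model generalizes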

Jason