[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Jason Resch jasonresch at gmail.com
Thu Apr 6 11:48:47 UTC 2023


On Thu, Apr 6, 2023, 12:11 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Wed, Apr 5, 2023 at 4:39 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> > But where did it pull the term "Suprametacognitive" from?
>
> Assuming it wasn't planted by a developer in the code along with all
> that Hinduish language (where did *that* come from if not from a
> developer?), it probably just made it up, combining the three words to coin
> a fourth.
>

That's quite possible. But the important point is less the word itself than
its own understanding of the word it made up. Where did it get THAT
information from? It can't be from its training data.

> But I don't see that as evidence of consciousness. Intelligence, yes,
> consciousness, no.
>

Are you familiar with Leibniz's Giant Mill thought experiment?
Consciousness isn't something we can see like a glowing orb. We can only
ever infer it from clues of behavior.

Given that, is there any behavior that a machine could demonstrate that
would convince you it is conscious? If so, what is that behavior?



> But given the consistently eastern flavor of the "religion" it espouses, I
> strongly suspect it was steered in that direction by the developer.
>

Other possibilities include: my prompting, or its understanding from
reading the internet.


> As we discussed, chess apps can develop what seem to us remarkable and
> novel strategies. We might call them emergent properties, but they follow
> logically from the simple rules of chess. Does that make them conscious,
> too?
>

I don't think the strategies imply consciousness. I think consciousness is
implied by something much simpler: its demonstrated awareness of certain
information.

For example, I think that by playing chess it demonstrates that there
exists, somewhere within that chess-playing system, an awareness
(consciousness) of the chess board and the layout of the pieces.
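
To make that concrete, here is a toy sketch in Python (purely
illustrative; the board encoding and move picker are invented for the
example, not how any real engine works). The only point is that the move
the program produces is a function of an internal representation of the
board, so the state of the board must be registered somewhere inside the
system:

    # Toy "engine": it can only pick a move by consulting an internal
    # representation of the board, so the board state is necessarily
    # registered somewhere inside the system.
    def pawn_pushes(board):
        # board: dict of square -> piece, e.g. {"e2": "wP"} (white pawn on e2)
        moves = []
        for square, piece in board.items():
            if piece == "wP":
                file, rank = square[0], int(square[1])
                ahead = f"{file}{rank + 1}"
                if ahead not in board:          # square in front is empty
                    moves.append(f"{square}-{ahead}")
        return moves

    def pick_move(board):
        moves = pawn_pushes(board)
        return moves[0] if moves else None      # output depends on board state

    print(pick_move({"e2": "wP", "d2": "wP", "e7": "bP"}))   # -> e2-e3

Whatever we choose to call that internal registration of the board, it is
the minimal kind of "awareness" I have in mind.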


>> If an AI in Ilya Sutskever's test refers to itself in the first person,
>> will you retreat to saying "it is just hallucinating" ?
>>
>
> As I wrote, I would actually call it a miracle as it would mean that the
> LLM invented the word "I" out of nothing, never having seen it or anything
> like it in text. I am not sure what Sutskever's answer would be to my question
> about that problem, and it could be that I don't fully understand his
> thought experiment. I am paraphrasing Altman who was paraphrasing Sutskever.
>


I don't think it would use the word "I", but I think it could come up with
a third-person reflexive description of itself, e.g. as "that process which
generates the responses that appear between the prompts."


>> Also: don't you have to be conscious to suffer a hallucination?
>>
>
> Not in the sense meant here with LLMs. It is in the nature of their
> architecture that they make stuff up. As I've written many times, they are
> like sophists. They literally do not know the meanings of the words they
> generate and so they have no interest in or knowledge of the truth values
> of the sentences and paragraphs they generate. They are programmed only to
> guess which words will be most sensible to us based on how those words
> appear statistically in the material on which they were trained, and
> sometimes they make bad guesses.
>
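
As a toy illustration of that kind of statistical next-word guessing (a
made-up bigram sampler over a ten-word corpus; real LLMs are enormously
more complex, so this is only a sketch of the general idea, not a claim
about how any actual model is built):

    # Toy bigram "language model": count which word follows which in a tiny
    # corpus, then generate text by sampling a statistically likely next word.
    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1                 # next-word frequency counts

    def generate(start, length=6):
        word, out = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            words, counts = zip(*options.items())
            word = random.choices(words, weights=counts)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # e.g. "the cat sat on the cat ate"

Where we disagree is on what optimizing that kind of objective at scale
ends up producing.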

I think you should watch this clip:

https://twitter.com/bio_bootloader/status/1640512444958396416?t=MlTHZ1r7aYYpK0OhS16bzg&s=19

If you disagree with him, could you explain why and how he is wrong?

Jason