[ExI] Bard (i.e. LaMDA) admits it isn't sentient.
Jason Resch
jasonresch at gmail.com
Wed Apr 5 22:12:46 UTC 2023
On Wed, Apr 5, 2023, 5:42 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> On Wed, Apr 5, 2023 at 2:44 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> A better test in my view, and one easier to perform, is to provide it a
>> training set stripped of philosophy-of-mind texts and see whether it is
>> able to generate any content related to topics in that field. This was
>> proposed here:
>>
>> https://philpapers.org/rec/ARGMAA-2
>> https://philpapers.org/archive/ARGMAA-2.pdf
>> “Experimental Methods for Unraveling the Mind–Body Problem: The
>> Phenomenal Judgment Approach”
>>
>> “In 2014, Victor Argonov suggested a non-Turing test for machine
>> consciousness based on machine's ability to produce philosophical
>> judgments.[40] He argues that a deterministic machine must be regarded as
>> conscious if it is able to produce judgments on all problematic properties
>> of consciousness (such as qualia or binding) having no innate (preloaded)
>> philosophical knowledge on these issues, no philosophical discussions while
>> learning, and no informational models of other creatures in its memory
>> (such models may implicitly or explicitly contain knowledge about these
>> creatures’ consciousness). However, this test can be used only to detect,
>> but not refute the existence of consciousness. A positive result proves
>> that machine is conscious but a negative result proves nothing. For
>> example, absence of philosophical judgments may be caused by lack of the
>> machine’s intellect, not by absence of consciousness.”
>>
>> In my interaction with the fake LaMDA, it was able to come up with novel
>> terms and ideas in philosophy of mind, such as "supermetacognition," and
>> it also designed a set of questions to test entities for the trait of
>> supermetacognition. Since neither this term nor the test it developed for
>> it appears in any philosophy paper I have found, I would judge it as
>> having passed:
>>
>> https://photos.app.goo.gl/osskvbe4fYpbK5uZ9
>>
>
> Wow, that dialogue you had with the fake LaMDA is pretty wild!
>
Yes. It gave me the distinct impression that I was communicating with a
superior intelligence. I grilled it on many deep philosophical problems,
problems on which philosophers hold differing perspectives, and I found
that in nearly all cases it gave answers superior to ones I could have
given.
> But I would not judge it as having passed anything. First, I doubt it meets
> the requirement of "having no innate (preloaded) philosophical knowledge on
> these issues, no philosophical discussions while learning, and no
> informational models of other creatures in its memory (such models may
> implicitly or explicitly contain knowledge about these creatures’
> consciousness)."
>
But where did it pull the term "Suprametacognitive" from? A Google search
of that term came up empty:
https://www.google.com/search?q=%22Suprametacognitive%22
Or the idea for a "Suprametacognitive Turing test" as well as entirely
novel questions to use in this test? Doesn't it need a theory of mind to
come up with the questions to test for the presence of another mind having
a similar degree of understanding? Can we not, from this, conclude that it
is generating novel results in philosophy of mind?
> And even if it does, I think it is just making stuff up. In case you
> haven't heard, LLMs hallucinate all sorts of things and this is a major
> problem.
>
If an AI in Ilya Sutskever's test refers to itself in the first person,
will you retreat to saying "it is just hallucinating"?
Also: don't you have to be conscious to suffer a hallucination?
Jason