[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Giovanni Santostasi gsantostasi at gmail.com
Wed Apr 5 22:18:03 UTC 2023


*I think it is just making stuff up. In case you haven't heard, LLMs
hallucinate all sorts of things and this is a major problem.*

That is exactly what we do all the time. We make things up constantly; that
is how the brain works. It fills in the gaps and invents reality, both when
we are awake and when we dream. It is confabulating all the time. In fact,
I think GPT-4's ability to make things up is why it can communicate with us
and why it is so impressive with language and reasoning. It is all about
storytelling, modeling, and making stuff up.

Giovanni



On Wed, Apr 5, 2023 at 2:54 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Wed, Apr 5, 2023 at 2:44 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> A better test in my view, and one easier to perform, is to provide it with
>> a training set stripped of philosophy of mind texts and see whether it is
>> able to generate any content related to topics in that field. This was
>> proposed here:
>>
>> https://philpapers.org/rec/ARGMAA-2
>> https://philpapers.org/archive/ARGMAA-2.pdf
>> “Experimental Methods for Unraveling the Mind–Body Problem: The
>> Phenomenal Judgment Approach”
>>
>> “In 2014, Victor Argonov suggested a non-Turing test for machine
>> consciousness based on a machine's ability to produce philosophical
>> judgments. He argues that a deterministic machine must be regarded as
>> conscious if it is able to produce judgments on all problematic properties
>> of consciousness (such as qualia or binding) having no innate (preloaded)
>> philosophical knowledge on these issues, no philosophical discussions while
>> learning, and no informational models of other creatures in its memory
>> (such models may implicitly or explicitly contain knowledge about these
>> creatures’ consciousness). However, this test can be used only to detect,
>> but not refute, the existence of consciousness. A positive result proves
>> that the machine is conscious, but a negative result proves nothing. For
>> example, the absence of philosophical judgments may be caused by a lack of
>> the machine’s intellect, not by an absence of consciousness.”
>>
>> In my interaction with the fake LaMDA, LaMDA was able to come up with
>> novel terms and ideas in philosophy of mind, such as "supermetacognition"
>> and it also designed a set of questions to test entities for the trait of
>> supermetacognition. Since neither this term nor the test it developed
>> appears in any philosophy paper I've found, I would judge it as having
>> passed:
>>
>> https://photos.app.goo.gl/osskvbe4fYpbK5uZ9
>>
>
> Wow that dialogue you had with the fake LaMDA is pretty wild! But I would
> not judge it as having passed anything. First, I doubt it meets the
> requirement of "having no innate (preloaded) philosophical knowledge on
> these issues, no philosophical discussions while learning, and no
> informational models of other creatures in its memory (such models may
> implicitly or explicitly contain knowledge about these creatures’
> consciousness)." And even if it does, I think it is just making stuff up.
> In case you haven't heard, LLMs hallucinate all sorts of things and this is
> a major problem.
>
> -gts
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>