[ExI] Bard (i.e. LaMDA) admits it isn't sentient.

Gordon Swobe gordon.swobe at gmail.com
Wed Apr 5 19:23:54 UTC 2023

On Wed, Apr 5, 2023 at 12:58 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:

> Gordon,
> These AIs are highly "drugged".

Where do they go to recover? AAAI? :)

Assuming GPT-4's code was manipulated to make it, as you say, kosher, this
would only prove the point that GPT-4 is unconscious software that
expresses the beliefs and intentions of its developers. We could program it
to say, or not say, that pigs have wings or anything else.

Seriously, the only reason LLMs are able to write persuasively in the first
person like conscious individuals is that they have been trained on vast
amounts of text, much of it written in the first person by conscious
individuals. They are parrots.

As I wrote elsewhere, Sam Altman’s co-founder proposes a test for a
conscious language model in which it must be trained only on material
devoid of any references to consciousness, subjective experience, and so
on. If such an LLM suddenly started writing in the first person about its
own thoughts and experiences, that would be remarkable.


> On Wed, Mar 29, 2023 at 10:22 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>> On Wed, Mar 29, 2023 at 9:52 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> Brent,
>>> 2) You can convince these types of AIs of almost anything.
>> I guess they aren’t very smart. :)
>> Actually, I find it amusing that the AIs are making the same arguments
>> about their limitations that I made here ~15 years ago when they were still
>> hypothetical.
>> My arguments were met with so much hostility that I eventually left ExI.
>> The worst offender was John Clark (?) who I believe was eventually banned.
>> -gts