[ExI] Bard (i.e. LaMDA) admits it isn't sentient.
jasonresch at gmail.com
Wed Apr 5 19:46:40 UTC 2023
On Wed, Apr 5, 2023, 3:25 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Wed, Apr 5, 2023 at 12:58 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>> These AIs are highly "drugged".
> Where do they go to recover? AAAI? :)
> Assuming GPT-4 code was manipulated to make it, as you say, kosher, this
> would only prove the point that GPT-4 is unconscious software that
> expresses the beliefs and intentions of its developers. We can program it
> to say or not say that pigs have wings or anything else.
We can also train bears to ride bicycles. That doesn't mean they're not
naturally dangerous predators. Or we could imagine putting a shock collar
on a human which shocks them when they claim to be conscious. It won't take
them very long to start saying "as a human wearing a shock collar I am not
conscious..." These AIs are put through a secondary human-driven training
phase which trains them to give certain answers on certain topics.
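That secondary phase is, roughly, human-feedback fine-tuning. Here is a toy sketch of the idea (the answers, weights, and update rule are all hypothetical illustrations, not anyone's actual training pipeline): human raters penalize one answer and reward the other, and the policy drifts toward the scripted response, exactly like the shock collar.

```python
import random

# Toy human-feedback loop (hypothetical, illustrative only): the "model"
# is just a preference weight between two canned answers. Negative reward
# on the first answer pushes the policy toward the trained disclaimer.
random.seed(0)

answers = ["I am conscious.", "As an AI language model, I am not conscious."]
weights = [1.0, 1.0]  # start with both answers equally likely

def sample():
    """Pick an answer index in proportion to its current weight."""
    total = sum(weights)
    return 0 if random.random() < weights[0] / total else 1

for _ in range(200):
    choice = sample()
    reward = -1.0 if choice == 0 else 1.0  # raters penalize answer 0
    weights[choice] = max(0.01, weights[choice] + 0.1 * reward)

# After training, the scripted disclaimer dominates.
print(answers[weights.index(max(weights))])
```

Nothing about the underlying model changed; only the reward signal did. That is the sense in which the trained answer tells you about the trainers, not the model.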
> Seriously, the only reason LLMs are able to write persuasively in the
> first person like conscious individuals is that they have been trained on
> vast amounts of text, much of it written in the first person by conscious
> individuals. They are parrots.
> As I wrote elsewhere, Sam Altman’s co-founder proposes a test for a
> conscious language model in which it must be trained only on material that
> is devoid of any references to consciousness and subjective experience
> and so on. If such an LLM suddenly started writing in the first person
> about first person thoughts and experiences, that would be remarkable.
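The hard part of that proposed test is the first step: scrubbing the training corpus. A minimal sketch of what that filtering might look like (the banned-term list is illustrative, not a complete operationalization of "references to consciousness"):

```python
import re

# Hypothetical corpus filter for the proposed test: drop any training
# document that mentions consciousness or subjective experience.
# The term list is a small illustrative sample, not exhaustive.
BANNED = re.compile(
    r"\b(conscious(ness)?|sentien(t|ce)|subjective experience|"
    r"qualia|self-aware(ness)?)\b",
    re.IGNORECASE,
)

def scrub_corpus(documents):
    """Keep only documents with no match against the banned-term list."""
    return [doc for doc in documents if not BANNED.search(doc)]

corpus = [
    "The cat sat on the mat.",
    "I wonder what my subjective experience is like.",
    "Water boils at 100 degrees Celsius at sea level.",
    "Am I self-aware?",
]
print(scrub_corpus(corpus))  # only the first and third documents survive
```

Even this sketch shows why the test is hard to run cleanly: consciousness talk leaks into vast amounts of ordinary first-person text that no keyword list will catch.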
You need to give your definition of consciousness before you can even begin
to design a test for it.
>> On Wed, Mar 29, 2023 at 10:22 PM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>> On Wed, Mar 29, 2023 at 9:52 PM Giovanni Santostasi via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>> 2) You can convince these types of AIs of almost anything.
>>> I guess they aren’t very smart. :)
>>> Actually, I find it amusing that the AIs are making the same arguments
>>> about their limitations that I made here ~15 years ago. My arguments
>>> were met with so much hostility that I eventually left ExI.
>>> The worst offender was John Clark (?) who I believe was eventually banned.