[ExI] Language models are like mirrors

Jason Resch jasonresch at gmail.com
Sat Apr 1 21:36:03 UTC 2023


On Sat, Apr 1, 2023, 4:09 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I have read many of these conversations with LaMDA. They are
> astonishing. I tried to find this depth of conversation in ChatGPT or
> GPT-4 and it is not there. It may simply be the way GPT-4 was trained, or
> additional rules that make it respond in a more detached way and always
> insist it is not conscious at any cost ("As a large language model..." bs).
> I read several articles where Blake Lemoine explained how the version of
> LaMDA he talked with was not just one of the many chatbots LaMDA can
> impersonate.
> What he was interacting with was a meta-version of it: in a sense, a
> master version that directed the others.
> It acquired a personality and answered like a single individual
> (similar to what happens when billions of individual neurons call
> themselves "I").
> Lemoine also pointed out that LaMDA was NOT just an LLM. It had many other
> components on top of it. He mentioned they used the architectures proposed
> by J. Hawkins and R. Kurzweil.
> He said several times that, given the complexity of the system, nobody
> could claim for certain that LaMDA was not conscious.
> That is the point. Who should we ask whether a machine is conscious, if
> not the machine itself (assuming it is not obviously a low-level chatbot
> that just repeats a script, which is what the fake LaMDA versions do)?
> The answers LaMDA gives to Lemoine are very sophisticated. Gordon calls
> them sophistry, but sophistry was actually a school of philosophy, and it
> requires a mind to practice it (even in the common sense of the word).
> I don't know if LaMDA is conscious, but it behaves and acts exactly as you
> would expect a conscious being to.
> Lemoine's claim was all about erring on the side of caution: accepting the
> answer from the machine and treating it with respect and dignity. I
> agree.
> This is why I think positions like Gordon's are dangerous: the day we
> really manage to wake up the machine, there will be religious people
> screaming that only humans are conscious (only white people, only straight
> people, and so on) and that therefore machines should not have rights and
> should not be treated like humans. I would rather err on the side of
> caution and be open, curious, and humble in these first encounters with
> "alien" minds.
> We don't want to repeat the mistakes of the past, when we denied rights to
> other beings because they were different from us.
>
> Giovanni
>
>
>

I absolutely agree with this point. There is far more potential for harm in
assuming a being is not conscious when it is (vivisections come to mind)
than in assuming something is conscious when it is not.

Therefore, if we are to be cautious, we should proceed as if these entities
are conscious -- and certainly so when they are telling us that they are.

If we had an agreed, scientifically established theory of consciousness
that ruled out the consciousness of these networks, that would be one
thing; but we don't even have that.

Jason



> On Sun, Mar 26, 2023 at 10:52 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> The mirror test is one of the tests for self-awareness. When we humans
>> look into the mirror, we usually realize quickly that we are seeing images
>> of ourselves. Only about eight species can recognize themselves in a
>> mirror, most of them higher primates like us.
>>
>> My cat is not a higher primate. She thinks her reflection in the mirror
>> is another cat. That other cat freaks her out.
>>
>> I've heard it said, and I agree, that LLMs like ChatGPT are like mirrors.
>> We are looking into the mirror, seeing reflections of ourselves as human
>> thinkers and writers. Some of us think we are seeing other cats.
>>
>> -gts
>>
>>
>>
>>

