[ExI] Zombies

Brent Allsop brent.allsop at gmail.com
Wed May 3 00:21:28 UTC 2023


Hi Gordon,
I have a question for you.  Jason seems to understand everyone's views.
I initially thought your way of thinking was similar to mine, but from what
Jason is saying, maybe not.
He said you believe: "LLMs have no understanding whatsoever, only
'understanding' -- an appearance of understanding."

To me, having an "appearance of understanding" includes the ability to be
indistinguishable from a human, intelligence-wise.
To me, the only thing abstract systems can't do is know what subjective
qualities are like.
But Jason seems to be saying this doesn't matter, and that you think there
is something else, beyond this, which LLMs can't do.
So my question to you, Gordon, is: is this true?  Is there something other
than knowing what qualities are like that LLMs can't do?
And if so, what is it?

On Mon, May 1, 2023 at 8:58 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Mon, May 1, 2023, 6:18 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Hi Jason,
>>
>>
>> On Mon, May 1, 2023 at 1:39 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>> As I see it, the two camps are:
>>>> 1. Those who believe behaving in every discernible way as if one
>>>> understands is different from genuine understanding.
>>>> 2. Those who believe behaving in every discernible way as if one
>>>> understands is no different from genuine understanding.
>>>>
>>>> As laid out, neither camp is falsifiable, since "in every
>>>> discernible way" covers everything that could be tested for. But the
>>>> law of parsimony favors the second camp: it has only one notion of
>>>> "understanding", the one defined by behavior, rather than postulating
>>>> a second form, "genuine understanding", distinct from behaving as if
>>>> one understands and untestable by any objective means.
>>>>
>>>
>> By "genuine understanding", I'm assuming you are talking about something
>> like it has an ability to experience a redness quality, so can say: 'oh
>> THAT is what redness is like.
>>
>
> I was talking more about LLMs vs. human brains. Gordon said that human
> brains have true or genuine understanding, whereas LLMs have no
> understanding whatsoever, only "understanding" -- an appearance of
> understanding. I don't know what camp 1 means by genuine understanding.
> Gordon seemed to believe it involves consciousness, in which case the
> debate over genuine understanding collapses into the zombies-are-possible
> vs. zombies-are-impossible debate.
>
>
>
>> And if they discovered which of all our descriptions of stuff in the
>> brain is a description of that redness, and if they could reliably
>> demonstrate that to anyone, as we start repairing and doing significant
>> engineering work on subjective consciousness (doing things like
>> endowing people with new colorness qualities nobody has ever
>> experienced before),
>>
>
> We've done that to monkeys already. Did you read that paper?
>
>> would that not force everyone in the number 2 camp to admit their camp
>> has been falsified?
>>
>
> I don't think the qualia question is necessarily relevant to the question
> of whether there is a form of understanding which exists but cannot be
> detected, although I do see a parallel with qualia: qualia are something
> that exists and that some people argue cannot be detected (they believe
> zombies are possible).
>
> Jason