[ExI] Zombies

Jason Resch jasonresch at gmail.com
Mon May 1 19:33:05 UTC 2023


Typo correction: "behaviors if one understand" was meant to be "behaves as
if one understands".

Jason

On Mon, May 1, 2023, 3:16 PM Jason Resch <jasonresch at gmail.com> wrote:

>
>
> On Mon, May 1, 2023, 2:27 PM William Flynn Wallace via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Again, note that I don't actually have a position on whether they are
>> conscious or not, or even whether they understand what they are saying.
>> My position is that they may be, or may do. I'm not insisting one way or
>> the other, but saying we can't rule it out. It is interesting, though,
>> and suggestive, that, as many people have now pointed out many times,
>> the evidence is pointing in a certain direction. There's certainly
>> no evidence that we can rule it out.
>>
>> How, just how, tell me, can you do a test for something when you have not
>> explicated its characteristics?  Definition, please.   Is there evidence?
>> Based on what assumptions?  This whole discussion is spinning its wheels
>> waiting for the traction of definitions, which it seems everybody is
>> willing to give, but not in a form which can be tested.   bill w
>>
>
>
> As I see it, the two camps are:
> 1. Those who believe behaving in every discernable way as if one
> understands is different from genuine understanding.
> 2. Those who believe behaving in every discernable way as if one
> understands is no different from genuine understanding.
>
> As laid out, neither camp is falsifiable, since "in every discernable way"
> covers everything that could be tested for. But the law of parsimony favors
> the second camp, as it has only one notion of "understanding", one defined
> by behavior, rather than postulating the existence of another form of
> "genuine understanding", different from "behaviors if one understands",
> a form which cannot be tested for by any objective means.
>
> Jason
>
>
>
>>
>> On Mon, May 1, 2023 at 11:21 AM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Gordon Swobe wrote:
>>>
>>>  > The mere fact that an LLM can be programmed/conditioned by its
>>> developers to say it is or is not conscious should be evidence that it
>>> is not.
>>>
>>> The fact that you can say this is evidence that you are letting your
>>> prejudice prevent you from thinking logically. If the above is true,
>>> then the same argument can be applied to humans (just replace
>>> 'developers' with 'parents' or 'peers', or 'environment', etc.).
>>>
>>>
>>>  > Nobody wants to face the fact that the founders of OpenAI themselves
>>> insist that the only proper test of consciousness in an LLM would
>>> require that it be trained on material devoid of references to first
>>> person experience. It is only because of that material in training
>>> corpus that LLMs can write so convincingly in the first person that they
>>> appear as conscious individuals and not merely as very capable
>>> calculators and language processors.
>>>
>>> So they are proposing a test for consciousness. Ok. A test that nobody
>>> is going to do, or probably can do.
>>>
>>> This proves nothing. Is this lack of evidence your basis for insisting
>>> that they cannot be conscious? Not long ago, it was your understanding
>>> that all they do is statistics on words.
>>>
>>> Again, note that I don't actually have a position on whether they are
>>> conscious or not, or even whether they understand what they are saying.
>>> My position is that they may be, or may do. I'm not insisting one way or
>>> the other, but saying we can't rule it out. It is interesting, though,
>>> and suggestive, that, as many people have now pointed out many times,
>>> the evidence is pointing in a certain direction. There's certainly
>>> no evidence that we can rule it out.
>>>
>>> Correct me if I'm wrong, but you go much further than this, and insist
>>> that no non-biological machines can ever be conscious or have deep
>>> understanding of what they say or do. Is this right?
>>>
>>> That goes way beyond LLMs, of course, and is really another discussion
>>> altogether.
>>>
>>> But if it is true, then why are you leaning so heavily on the 'they are
>>> only doing statistics on words' argument? Surely claiming that they
>>> can't have understanding or consciousness /because they are
>>> non-biological/ would be more relevant? (or are you just holding this in
>>> reserve for when the 'statistics!' one falls over entirely, or becomes
>>> irrelevant?)
>>>
>>> Ben
>>>
>>
>