[ExI] Zombies

Ben Zaiboc ben at zaiboc.net
Mon May 1 16:19:14 UTC 2023


Gordon Swobe wrote:

> The mere fact that an LLM can be programmed/conditioned by its
> developers to say it is or is not conscious should be evidence that it
> is not.

The fact that you can say this is evidence that you are letting your 
prejudice prevent you from thinking logically. If the above is true, 
then the same argument can be applied to humans (just replace 
'developers' with 'parents' or 'peers', or 'environment', etc.).


> Nobody wants to face the fact that the founders of OpenAI themselves
> insist that the only proper test of consciousness in an LLM would
> require that it be trained on material devoid of references to first
> person experience. It is only because of that material in the training
> corpus that LLMs can write so convincingly in the first person that
> they appear as conscious individuals and not merely as very capable
> calculators and language processors.

So they are proposing a test for consciousness. Ok. A test that nobody 
is going to do, or probably can do.

This proves nothing. Is this lack of evidence your basis for insisting 
that they cannot be conscious? Not long ago, it was your understanding 
that all they do is statistics on words.
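
To be concrete about what 'doing statistics on words' means at its 
crudest, here is a toy bigram model in Python. The corpus and the names 
in it are invented purely for illustration, and a real LLM is vastly 
more elaborate, but the caricature shows what a pure word-statistics 
predictor looks like:

    # A toy "statistics on words" model: count which word follows which,
    # then predict the next word purely from those pair frequencies.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict(word):
        # Return the most frequent successor of `word`, or None if unseen.
        followers = bigrams.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict("the"))  # -> 'cat', the commonest word after 'the' here

Whether scaling that sort of statistics up by many orders of magnitude 
produces understanding is, of course, exactly the question at issue.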

Again, note that I don't actually have a position on whether they are 
conscious or not, or even whether they understand what they are saying. 
My position is that they may be, or may do. I'm not insisting one way or 
the other, but saying we can't rule it out. It is interesting, though, 
and suggestive, that, as many people have now pointed out many times, 
the evidence is pointing in a certain direction. There's certainly no 
evidence that would let us rule it out.

Correct me if I'm wrong, but you go much further than this, and insist 
that no non-biological machine can ever be conscious or have deep 
understanding of what it says or does. Is this right?

That goes way beyond LLMs, of course, and is really another discussion 
altogether.

But if it is true, then why are you leaning so heavily on the 'they are 
only doing statistics on words' argument? Surely claiming that they 
can't have understanding or consciousness /because they are 
non-biological/ would be more relevant? (Or are you just holding this in 
reserve for when the 'statistics!' one falls over entirely, or becomes 
irrelevant?)

Ben

