<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000"><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">Again, note that I don't actually have a position on whether they are</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">conscious or not, or even whether they understand what they are saying.</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">My position is that they may be, or may do. I'm not insisting one way or</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">the other, but saying we can't rule it out. It is interesting, though,</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">and suggestive, that, as many people have pointed out many times</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">now, the evidence is pointing in a certain direction. 
There's certainly</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">no evidence that we can rule it out.</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><br></span></div><div class="gmail_default" style="font-size:large;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-size:small"><font face="comic sans ms, sans-serif">How, just how, tell me, can you do a test for something when you have not explicated its characteristics? Definition, please. Is there evidence? Based on what assumptions? This whole discussion is spinning its wheels waiting for the traction of definitions, which it seems everybody is willing to give, but not in a form that can be tested. bill w</font></span></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, May 1, 2023 at 11:21 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Gordon Swobe wrote:<br>
<br>
> The mere fact that an LLM can be programmed/conditioned by its <br>
developers to say it is or is not conscious should be evidence that it <br>
is not.<br>
<br>
The fact that you can say this is evidence that you are letting your <br>
prejudice prevent you from thinking logically. If the above is true, <br>
then the same argument can be applied to humans (just replace <br>
'developers' with 'parents' or 'peers', or 'environment', etc.).<br>
<br>
<br>
> Nobody wants to face the fact that the founders of OpenAI themselves <br>
insist that the only proper test of consciousness in an LLM would <br>
require that it be trained on material devoid of references to first <br>
person experience. It is only because of that material in training <br>
corpus that LLMs can write so convincingly in the first person that they <br>
appear as conscious individuals and not merely as very capable <br>
calculators and language processors.<br>
<br>
So they are proposing a test for consciousness. Ok. A test that nobody <br>
is going to do, or probably can do.<br>
<br>
This proves nothing. Is this lack of evidence your basis for insisting <br>
that they cannot be conscious? Not long ago, it was your understanding <br>
that all they do is statistics on words.<br>
<br>
Again, note that I don't actually have a position on whether they are <br>
conscious or not, or even whether they understand what they are saying. <br>
My position is that they may be, or may do. I'm not insisting one way or <br>
the other, but saying we can't rule it out. It is interesting, though, <br>
and suggestive, that, as many people have pointed out many times <br>
now, the evidence is pointing in a certain direction. There's certainly <br>
no evidence that we can rule it out.<br>
<br>
Correct me if I'm wrong, but you go much further than this, and insist <br>
that no non-biological machines can ever be conscious or have deep <br>
understanding of what they say or do. Is this right?<br>
<br>
That goes way beyond LLMs, of course, and is really another discussion <br>
altogether.<br>
<br>
But if it is true, then why are you leaning so heavily on the 'they are <br>
only doing statistics on words' argument? Surely claiming that they <br>
can't have understanding or consciousness /because they are <br>
non-biological/ would be more relevant? (or are you just holding this in <br>
reserve for when the 'statistics!' one falls over entirely, or becomes <br>
irrelevant?)<br>
<br>
Ben<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>