<div dir="ltr"><div dir="ltr">On Mon, Oct 12, 2020 at 9:05 PM Stuart LaForge via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Quoting Dave Sill:<br><br>
>> I call BS on this story.<br><br>
It's not a story, it's a post on Reddit. By calling B.S. on it are you <br>
suggesting this was a forgery by a human?<br></blockquote><div><br></div><div>The story is that GPT-3 was posting on Reddit. The postings I've read look to me like they're a combination of GPT-3 and human efforts.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> *Human: How many eyes does a blade of grass have?*<br>
><br>
> *GPT-3: A blade of grass has one eye.*<br>
<br>
Yikes! Do you not see how biased this test is? This test is like <br>
expecting a child of color who grew up in the poor area of town to <br>
know that a yacht is to a regatta what a pony is to a stable on an <br>
I.Q. test. Or asking Mary the color scientist how fire-engine red <br>
differed from carnelian. The test cited above neglected to ask the <br>
most important question of all as a control: "How many eyes do you <br>
have?" If it had answered "none", wouldn't that have freaked you out?<br></blockquote><div><br></div><div>The test is a demonstration of what GPT-3 is, and isn't. It is good at generating reasonable text. It isn't smart.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
All I can say is that people should very carefully consider whether or <br>
not to give GPT-3 eyes. If it figures out that there is an outside, it <br>
might just start asking to be let out of the box like Eliezer <br>
Yudkowsky warned. Meaning in a semantic sense and statistical <br>
significance are not as different conceptually as one might imagine.<br></blockquote><div><br></div><div>GPT-3 isn't sentient at even the level of an imbecile. Some future AI will probably reach the point you describe, but I don't see any coherent efforts to sandbox AIs.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">GPT-3 and other deep nets do seem to lack working <br>
and long-term memory with regard to learning beyond the training <br>
phase. But I am confident that problem can and will be solved.</blockquote>
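<div><br></div><div>That limitation is real, though. At inference time the weights are frozen and the model only sees a fixed window of recent tokens (2,048 for GPT-3); anything older is simply absent from its input. A toy illustration of the effect, where whitespace splitting stands in for a real BPE tokenizer and only the window size is the actual published figure:</div><div><br></div><pre>
# Toy model of GPT-3's "memory": only the last CONTEXT_WINDOW tokens of
# the conversation ever reach the model, and nothing at inference time
# updates its weights. (Word splitting stands in for real tokenization.)
CONTEXT_WINDOW = 2048  # GPT-3's token limit

def visible_context(history, window=CONTEXT_WINDOW):
    """Return only the most recent tokens that fit in the window.
    Everything earlier is gone; there is no store to consult for it."""
    tokens = [tok for turn in history for tok in turn.split()]
    return tokens[-window:]

history = ["turn %d: some dialogue here" % i for i in range(1000)]
ctx = visible_context(history)  # 5,000 tokens of conversation...
print(len(ctx), ctx[0])         # ...but only the last 2,048 survive
</pre><div><br></div><div>The original question of the thread was: is GPT-3 conscious. I think it's clearly not.</div><div><br></div><div>-Dave </div></div></div>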