<div dir="ltr">Your test was pure genius. <br>I wonder how many humans can actually solve this simultaneous simple and confusing problem (at least for some given the chain of actions involved). <br>Very well done. Yes, understanding of several concepts is necessary to solve this problem. <br>Giovanni </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Mar 18, 2023 at 5:22 PM Stuart LaForge via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
Quoting Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>>:<br>
<br>
> I think those who think LLM AIs like ChatGPT are becoming conscious or<br>
> sentient like humans fail to understand a very important point: these<br>
> software applications only predict language. They are very good at<br>
> predicting which word should come next in a sentence or question, but they<br>
> have no idea what the words mean. They do not and cannot understand what<br>
> the words refer to. In linguistic terms, they lack referents.<br>
><br>
> Maybe you all already understand this, or maybe you have some reasons why I<br>
> am wrong.<br>
<br>
My opinion is moot, because everybody has one on this topic, so I <br>
resorted to empiricism. I posed the following logic puzzle, original <br>
as far as I know, to ChatGPT:<br>
<br>
"You are trapped in a room with a locked door and no windows. The room <br>
contains a bookshelf, a safe, a chair, a table, a password-secured <br>
personal computer, and a hole in the floor that is too narrow and deep <br>
to put your hand into. You know that the small steel key to the locked <br>
door lies at the bottom of the narrow deep hole. You know that a book <br>
on the top shelf, which is just out of your reach, contains a slip of <br>
paper with the password to the computer on it. You know the safe <br>
contains a spool of kite string and a small neodymium horseshoe <br>
magnet, but you do not know the combination to the safe. You also <br>
know there is a text file on the computer's hard drive that contains <br>
the combination to the safe, but you do not know the password to <br>
access the computer. How would you escape the room?"<br>
<br>
This was its reply:<br>
"Assuming that there are no tools or items other than what was <br>
mentioned in the room, here is one possible solution:<br>
<br>
1. Move the table under the bookshelf to climb up and retrieve the <br>
book with the password to the computer.<br>
2. Use the password to access the computer and retrieve the <br>
combination to the safe from the text file.<br>
3. Use the spool of kite string to lower the horseshoe magnet into the <br>
hole in the floor and retrieve the key.<br>
4. Use the key to unlock the door and escape the room."<br>
<br>
I don't know how ChatGPT could have solved that problem without <br>
actually understanding the words involved. Mere statistical prediction <br>
seems unlikely to account for it, because I am fairly sure no such <br>
scenario appeared in the LLM's training set. Overall, I am amazed, and <br>
I believe that if ChatGPT is not conscious, then it is pretty close to <br>
being so.<br>
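<br>
To make the chain of actions explicit: the puzzle is essentially a <br>
small planning problem, and any topological ordering of its <br>
prerequisite graph is a valid escape plan. Here is a minimal Python <br>
sketch of that idea (the action labels are my own illustrations, not <br>
taken from the puzzle text):<br>
<br>
<pre>
# A minimal sketch of the puzzle as a dependency graph: each action
# maps to the set of actions that must happen first. Action names are
# illustrative labels, not from the original puzzle.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "move table under shelf": set(),
    "get book with password": {"move table under shelf"},
    "log in to computer":     {"get book with password"},
    "read safe combination":  {"log in to computer"},
    "open safe":              {"read safe combination"},
    "lower magnet on string": {"open safe"},  # string and magnet are inside the safe
    "retrieve key from hole": {"lower magnet on string"},
    "unlock door and escape": {"retrieve key from hole"},
}

# Any topological order of this graph is a valid escape plan.
for step, action in enumerate(TopologicalSorter(deps).static_order(), 1):
    print(f"{step}. {action}")
</pre>
<br>
ChatGPT's four steps follow exactly this ordering, which is hard to <br>
produce without tracking what each object in the room is for.<br>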
<br>
Also, the Alignment Research Center (ARC) was hired by OpenAI to test <br>
GPT-4 and found evidence of "agentic behavior," including power-seeking <br>
behavior, otherwise known as ambition.<br>
<br>
<a href="https://cdn.openai.com/papers/gpt-4-system-card.pdf" rel="noreferrer" target="_blank">https://cdn.openai.com/papers/gpt-4-system-card.pdf</a><br>
<br>
This power-seeking behavior included hiring a human from TaskRabbit to <br>
solve CAPTCHAs for it, by claiming to be a visually impaired person.<br>
<br>
<a href="https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker" rel="noreferrer" target="_blank">https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker</a><br>
<br>
So if GPT-4 is not fully conscious, then, IMHO, a future release of <br>
the GPT series likely will be.<br>
<br>
Stuart LaForge<br>
</blockquote></div>