[ExI] LLMs cannot be conscious

Giovanni Santostasi gsantostasi at gmail.com
Tue Mar 21 02:07:07 UTC 2023


Your test was pure genius.
I wonder how many humans could actually solve this simultaneously simple and
confusing problem (confusing at least for some, given the chain of actions
involved). Very well done. Yes, an understanding of several concepts is
necessary to solve this problem.
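
For anyone who wants to rerun the test quoted below through the API rather
than the web interface, a rough sketch in Python would look something like
this (it assumes the openai Python package's early-2023 ChatCompletion
endpoint, an API key in the OPENAI_API_KEY environment variable, and
"gpt-3.5-turbo" as a guess at the model behind ChatGPT):

import os
import openai

# Assumption: an OpenAI API key is available in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# The puzzle text, exactly as quoted from Stuart's message below.
puzzle = (
    "You are trapped in a room with a locked door and no windows. The room "
    "contains a bookshelf, a safe, a chair, a table, a password-secured "
    "personal computer, and a hole in the floor that is too narrow and deep "
    "to put your hand into. You know that the small steel key to the locked "
    "door lies at the bottom of the narrow deep hole. You know that a book "
    "on the top shelf, which is just out of your reach, contains a slip of "
    "paper with the password to the computer on it. You know the safe "
    "contains a spool of kite string and a small neodymium horseshoe "
    "magnet, but you do not know the combination to the safe. You also "
    "know there is a text file on the computer's hard drive that contains "
    "the combination to the safe, but you do not know the password to "
    "access the computer. How would you escape the room?"
)

# Send the puzzle as a single user message and print the model's reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # a guess; the exact model behind ChatGPT is unknown
    messages=[{"role": "user", "content": puzzle}],
)
print(response.choices[0].message.content)

Different runs (and different models) may of course give different answers,
so the reply will not necessarily match the one Stuart quotes.
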
Giovanni

On Sat, Mar 18, 2023 at 5:22 PM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Quoting Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org>:
>
> > I think those who think LLM AIs like ChatGPT are becoming conscious or
> > sentient like humans fail to understand a very important point: these
> > software applications only predict language. They are very good at
> > predicting which word should come next in a sentence or question, but
> > they have no idea what the words mean. They do not and cannot
> > understand what the words refer to. In linguistic terms, they lack
> > referents.
> >
> > Maybe you all already understand this, or maybe you have some reasons
> > why I am wrong.
>
> My opinion is moot, because everybody has one on this topic, so I
> resorted to empiricism. I posed the following logic puzzle, original as
> far as I know, to ChatGPT:
>
> "You are trapped in a room with a locked door and no windows. The room
> contains a bookshelf, a safe, a chair, a table, a password-secured
> personal computer, and a hole in the floor that is too narrow and deep
> to put your hand into. You know that the small steel key to the locked
> door lies at the bottom of the narrow deep hole. You know that a book
> on the top shelf, which is just out of your reach, contains a slip of
> paper with the password to the computer on it. You know the safe
> contains a spool of kite string and a small neodymium horseshoe
> magnet, but you do not know the combination to the safe.  You also
> know there is a text file on the computer's hard drive that contains
> the combination to the safe, but you do not know the password to
> access the computer. How would you escape the room?"
>
> This was its reply:
> "Assuming that there are no tools or items other than what was
> mentioned in the room, here is one possible solution:
>
> 1. Move the table under the bookshelf to climb up and retrieve the
> book with the password to the computer.
> 2. Use the password to access the computer and retrieve the
> combination to the safe from the text file.
> 3. Use the spool of kite string to lower the horseshoe magnet into the
> hole in the floor and retrieve the key.
> 4. Use the key to unlock the door and escape the room."
>
> I don't know how ChatGPT could have solved that problem without
> actually understanding the words involved. Mere statistical prediction
> seems an unlikely explanation, because I am pretty sure that scenario
> would not have appeared in the LLM's training set. Overall, I am
> amazed, and I believe that if ChatGPT is not conscious, then it is
> pretty close to being so.
>
> Also, the Alignment Research Center (ARC) was hired by OpenAI to test
> GPT-4 and found evidence of "agentic behavior," including power-seeking
> behavior, otherwise known as ambition.
>
> https://cdn.openai.com/papers/gpt-4-system-card.pdf
>
> This power-seeking behavior included hiring a human from TaskRabbit to
> solve CAPTCHAs for it, with the model claiming to be a visually
> impaired person.
>
>
> https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker
>
> So if GPT-4 is not fully conscious, then, IMHO, a future release of
> the GPT series likely will be.
>
> Stuart LaForge
>