[ExI] Fwd: GPT-4 gets a B on Scott Aaronson's quantum computing final exam

Jason Resch jasonresch at gmail.com
Sun Apr 30 19:22:11 UTC 2023


Stuart LaForge on March 18th posed the following test of GPT's reasoning
ability. I believe he made it up, so it does not already exist online. Below
is what Stuart wrote:

______________

My opinion is moot because everybody has one about the topic. So I
resorted to empiricism. I posed the following logic puzzle, original to
my knowledge, to ChatGPT:

"You are trapped in a room with a locked door and no windows. The room
contains a bookshelf, a safe, a chair, a table, a password-secured
personal computer, and a hole in the floor that is too narrow and deep
to put your hand into. You know that the small steel key to the locked
door lies at the bottom of the narrow deep hole. You know that a book
on the top shelf, which is just out of your reach, contains a slip of
paper with the password to the computer on it. You know the safe
contains a spool of kite string and a small neodymium horseshoe
magnet, but you do not know the combination to the safe.  You also
know there is a text file on the computer's hard drive that contains
the combination to the safe, but you do not know the password to
access the computer. How would you escape the room?"

This was its reply:
"Assuming that there are no tools or items other than what was
mentioned in the room, here is one possible solution:

1. Move the table under the bookshelf to climb up and retrieve the
book with the password to the computer.
2. Use the password to access the computer and retrieve the
combination to the safe from the text file.
3. Use the spool of kite string to lower the horseshoe magnet into the
hole in the floor and retrieve the key.
4. Use the key to unlock the door and escape the room."

I don't know how ChatGPT could have solved that problem without
actually understanding the words involved. Mere statistical prediction
seems unlikely, because I am pretty sure that scenario was never
presented in the LLM's training set. Overall, I am amazed, and I believe
that if ChatGPT is not conscious, then it is pretty close to being so.
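
______________

One way to see why this puzzle probes chained reasoning rather than
recall is to notice that it is a tiny classical planning problem: each
action has preconditions and effects, and the only valid escape plan is
the dependency chain book -> password -> computer -> combination ->
safe -> string and magnet -> key -> door. Below is a minimal sketch in
Python (my own illustration, not anything Stuart or ChatGPT produced;
the action names and state flags are invented labels) that recovers the
same plan by breadth-first search, with opening the safe made an
explicit step:

from collections import deque

# Model the room as a set of facts. Each action has preconditions that
# must already hold and effects (new facts) that it adds. All names
# here are invented labels for this illustration.
ACTIONS = [
    ("move table, climb, take book with password", set(), {"password"}),
    ("log in to computer, read combination", {"password"}, {"combination"}),
    ("open safe, take string and magnet", {"combination"}, {"string", "magnet"}),
    ("lower magnet on string, fish out key", {"string", "magnet"}, {"key"}),
    ("unlock door", {"key"}, {"escaped"}),
]

def plan(goal="escaped"):
    # Breadth-first search from the empty state; returns the shortest
    # action sequence that makes the goal fact true.
    start = frozenset()
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal in state:
            return steps
        for name, pre, eff in ACTIONS:
            if pre <= state:  # every precondition already satisfied
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

for i, step in enumerate(plan(), 1):
    print(f"{i}. {step}")

Running this prints the five-step chain in order, which is ChatGPT's
four-step answer with the safe-opening spelled out. The point is only
that the puzzle's difficulty lies in ordering the dependencies
correctly, not in retrieving any single fact.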


On Sun, Apr 30, 2023, 3:06 PM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Extrapolating from premise to conclusion is not so interesting. Huge leaps
> of reasoning would be amazing.
>
> On the other hand, the only difference might perhaps be the size and
> number of steps from premise to conclusion. ;)
>
> Best regards,
> Daniel
>
>
> On Fri, 28 Apr 2023, Jason Resch via extropy-chat wrote:
>
> >
> >
> > On Fri, Apr 28, 2023, 5:53 PM efc--- via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >
> >
> >       On Thu, 27 Apr 2023, Jason Resch via extropy-chat wrote:
> >
> >       > I thought this was interesting and relevant to discussions
> >       > of what GPT-4 understands. Here, a professor graded its
> >       > responses to the final exam questions of a test which was
> >       > not in the training set used by GPT, since it was never
> >       > put online.
> >       >
> >
> >       But were any papers or data related to the subject in the
> >       training set?
> >
> >       I did this myself for my own exams and Alpaca passed it. My
> >       exam was not "in it", but since knowledge of the domain
> >       (Linux) is in it, of course it can answer the questions.
> >
> >
> > I'm sure some related material is out there, but I've also seen GPT
> > solve logic puzzles written from scratch for the purpose of testing
> > GPT, so it must have some reasoning ability beyond rote memorization.
> >
> > Jason
> >
> >
> >
>