[ExI] Fwd: GPT-4 gets a B on Scott Aaronson's quantum computing final exam

efc at swisscows.email efc at swisscows.email
Sun Apr 30 19:04:55 UTC 2023


Extrapolating from premise to conclusion is not so interesting. Huge leaps 
of reasoning would be amazing.

On the other hand, the only difference might be the size and number of 
steps from premise to conclusion. ;)

Best regards,
Daniel


On Fri, 28 Apr 2023, Jason Resch via extropy-chat wrote:

> 
> 
> On Fri, Apr 28, 2023, 5:53 PM efc--- via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
>
>       On Thu, 27 Apr 2023, Jason Resch via extropy-chat wrote:
>
>       > I thought this was interesting and relevant to discussions of what GPT-4 understands.
>       > Here a professor graded its responses to the final exam questions of a test which was
>       > not in GPT's training set, since it was never put online.
>       >
>
>       But were any papers or data related to the subject in the training set?
>
>       I did this myself with my own exams, and Alpaca passed. My exam was not
>       "in it," but since knowledge of the domain (Linux) is in it, of course it
>       can answer the questions.
> 
> 
> I'm sure some related material is out there, but I've also seen GPT solve logic puzzles written from scratch for the purpose of
> testing GPT, so it must have some reasoning ability beyond rote memorization.
> 
> Jason



More information about the extropy-chat mailing list