[ExI] Fwd: GPT-4 gets a B on Scott Aaronson's quantum computing final exam

Gordon Swobe gordon.swobe at gmail.com
Sat Apr 29 02:01:17 UTC 2023


On Fri, Apr 28, 2023 at 6:36 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> As I said, I was blown away when GPT-4 told me it had made a mistake by
> coloring the area below the horizon in its helicopter drawing with the
> same color as the sky. I simply told it that I thought it had made a
> mistake and asked it to tell me what the mistake was. I am not sure how
> it went through the steps of analyzing this, particularly because it was
> all SVG code. This is not something that could have been solved by
> accessing some archived information. Gordon never addressed how an
> autocomplete can achieve this type of analysis and recognize a subtle
> error like this.
>
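
[For illustration only: the drawing from that exchange is not reproduced
here, and the shapes and colors below are invented. In SVG, the kind of
mistake described might look like this hypothetical snippet, where the
rectangle for the ground below the horizon mistakenly reuses the sky's
fill color instead of a ground color.

  <svg xmlns="http://www.w3.org/2000/svg" width="200" height="150">
    <!-- sky: everything above the horizon line at y=75 -->
    <rect x="0" y="0" width="200" height="75" fill="#87CEEB"/>
    <!-- ground: below the horizon, but mistakenly filled with the same
         #87CEEB as the sky instead of a ground color such as #228B22 -->
    <rect x="0" y="75" width="200" height="75" fill="#87CEEB"/>
    <!-- helicopter body, placeholder shape -->
    <ellipse cx="100" cy="50" rx="30" ry="12" fill="#555555"/>
  </svg>

Noticing the error requires relating the y coordinates (above versus below
the horizon) to the fill attributes, not just matching surface text.]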

I don’t have the time to participate in every thread, but I see my name
mentioned here. I would suppose it used a process very similar to the one
by which it can review and correct itself in ordinary text when asked. I’ve
seen it do so a number of times, and in fact it will review its output even
when asked only indirectly. I saw it do this when I was testing its
“sense of humor” the other day.

That it can write in various languages, including computer languages,
translate between them, and review and correct itself is all very
amazing, but my doubts are not about the power and intelligence of GPT-4.
It’s very cool and impressive, but so was my first Texas Instruments
handheld calculator in the ’70s.

-gts

