[ExI] GPT-4 becomes 30% more accurate when asked to critique itself

BillK pharos at gmail.com
Mon Apr 3 11:17:42 UTC 2023


GPT-4 becomes 30% more accurate when asked to critique itself
By Loz Blain, April 03, 2023

<https://newatlas.com/technology/gpt-4-reflexion/>

Quotes:
Even if the unlikely six-month moratorium on AI development goes
ahead, it seems GPT-4 has the capability for huge leaps forward if it
just takes a good hard look at itself. Researchers have had GPT
critique its own work for a 30% performance boost.

"It’s not everyday that humans develop novel techniques to achieve
state-of-the-art standards using decision-making processes once
thought to be unique to human intelligence," wrote researchers Noah
Shinn and Ashwin Gopinath. "But, that’s exactly what we did."

The "Reflexion" technique takes GPT-4's already-impressive ability to
perform various tests, and introduces "a framework that allows AI
agents to emulate human-like self-reflection and evaluate its
performance." Effectively, it introduces extra steps in which GPT-4
designs tests to critique its own answers, looking for errors and
missteps, then rewrites its solutions based on what it's found.

More and more often, the solution to AI problems appears to be more
AI. In some ways, this feels a little like a generative adversarial
network, in which two AIs hone each other's skills, one trying to
generate images, for example, that can't be distinguished from "real"
images, and the other trying to tell the fake ones from the real ones.
But in this case, GPT is both the writer and the editor, working to
improve its own output.
---------------
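
For a concrete picture of the critique-and-rewrite loop the article
describes, here is a minimal sketch in Python. The ask_model callable,
the prompts, and the "reply OK" stopping rule are illustrative
assumptions, not the Reflexion authors' actual implementation.

from typing import Callable

def solve_with_reflection(ask_model: Callable[[str], str],
                          task: str,
                          max_rounds: int = 3) -> str:
    # First attempt at the task.
    answer = ask_model(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        # Ask the same model to look for errors and missteps in its answer.
        critique = ask_model(
            "Review the answer below for errors or missteps.\n"
            f"Task: {task}\nAnswer: {answer}\n"
            "Reply with the single word OK if it looks correct; "
            "otherwise list the problems."
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model found nothing further to fix
        # Rewrite the answer using the critique it just produced.
        answer = ask_model(
            "Rewrite the answer to fix the listed problems.\n"
            f"Task: {task}\nAnswer: {answer}\nProblems: {critique}"
        )
    return answer

Wiring ask_model to any chat-model client (and tuning max_rounds) is
all that is needed to try the idea; the boost reported in the article
comes from letting the model act as both writer and editor in this way.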

It's just saying 'Are you sure about that?' to itself.  :)

BillK
