[ExI] GPT-3 Improving accuracy

BillK pharos at gmail.com
Thu Jan 27 17:53:29 UTC 2022

On Wed, 26 Jan 2022 at 13:01, BillK <pharos at gmail.com> wrote:
> One of the problems with GPT-3 extracting data from the internet is
> that it often includes misinformation from poor sources. OpenAI is
> trying to correct this.
> <https://openai.com/blog/improving-factual-accuracy/>
> Quote:
> WebGPT: Improving the Factual Accuracy of Language Models through
> Web Browsing (December 16, 2021)
> BillK

The new version of GPT-3 is much better behaved (and should be less toxic).
OpenAI has trained its flagship language model to follow instructions,
making it spit out less unwanted text—but there's still a way to go.

By Will Douglas Heaven, January 27, 2022


OpenAI has built a new version of GPT-3, its game-changing language
model, that it says does away with some of the most toxic issues that
plagued its predecessor. The San Francisco-based lab says the updated
model, called InstructGPT, is better at following the instructions of
people using it—known as “alignment” in AI jargon—and thus produces
less offensive language, less misinformation, and fewer mistakes
overall—unless explicitly told not to.

OpenAI found that users of its API favored InstructGPT over GPT-3 more
than 70% of the time. “We're no longer seeing grammatical errors in
language generation,” says Ben Roe, head of product at Yabble, a
market research company that uses OpenAI’s models to create
natural-language summaries of its clients’ business data. “There’s
also clear progress in the new models’ ability to understand and
follow instructions.”
