[ExI] Training GPT-3 to play nice - it's difficult!

BillK pharos at gmail.com
Mon Nov 2 15:27:31 UTC 2020


How to make a chatbot that isn’t racist or sexist
Tools like GPT-3 are stunningly good, but they feed on the cesspits of
the internet. How can we make them safe for the public to actually
use?

by Will Douglas Heaven, October 23, 2020

<https://www.technologyreview.com/2020/10/23/1011116/chatbot-gpt3-openai-facebook-google-safety-fix-racist-sexist-language-ai/>

Quote:
When GPT-3 was released this summer, people were stunned at how good
it was at producing paragraphs that could have been written by a human
on any topic it was prompted with. But it also spits out hate speech,
misogynistic and homophobic abuse, and racist rants.
Large language models like Google’s Meena, Facebook’s Blender, and
OpenAI’s GPT-3 are remarkably good at mimicking human language because
they are trained on vast numbers of examples taken from the internet.
That’s also where they learn to mimic unwanted prejudice and toxic
talk. It’s a known problem with no easy fix.
-------------------
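As a rough illustration of why this is "a known problem with no easy
fix", here is a minimal sketch (my own, not from the article) of the
crudest mitigation: resample the model until its output passes a word
blocklist. The generate() callable stands in for a model API, and
BLOCKLIST holds placeholder terms; both are assumptions made for the
sake of the example.

    import re

    BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real list

    def is_toxic(text):
        """Flag text containing any blocklisted word (whole-word match)."""
        words = set(re.findall(r"[a-z']+", text.lower()))
        return bool(words & BLOCKLIST)

    def safe_generate(generate, prompt, max_tries=5):
        """Resample from the model until the output passes the filter."""
        for _ in range(max_tries):
            reply = generate(prompt)
            if not is_toxic(reply):
                return reply
        # Give up rather than emit something the filter caught.
        return "Sorry, I can't respond to that."

A blocklist like this rejects harmless uses of listed words and misses
abuse composed entirely of innocuous ones, which is roughly why the
article treats toxicity as an open research problem rather than a
solved filtering task.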

Hmmmmm. So an AI trained to be like humans is likely to be just as
badly behaved as humans? It should be good at fighting wars, then.

BillK


