[ExI] AI is racist, bigoted and misogynistic

BillK pharos at gmail.com
Wed Sep 22 11:13:58 UTC 2021

But there’s an even greater challenge stymieing the machine learning
community, and it’s starting to make the world’s smartest developers
look a bit silly. We can’t get the machines to stop being racist,
xenophobic, bigoted, and misogynistic.


Text generators, such as OpenAI’s GPT-3, are toxic. Currently, OpenAI
has to limit usage when it comes to GPT-3 because, without myriad
filters in place, it’s almost certain to generate offensive text.
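The article doesn't say how OpenAI's filters work, but the general idea of rejecting toxic completions after generation can be sketched roughly as below. The blocklist, the `is_toxic` check, and the retry loop are all illustrative assumptions — production systems use trained moderation classifiers, not keyword matching.

```python
# Hedged sketch of post-generation output filtering.
# BLOCKLIST and the keyword check are placeholders, not OpenAI's method;
# real systems use a learned toxicity classifier instead.

BLOCKLIST = {"slur1", "slur2"}  # stand-in terms for illustration only


def is_toxic(text: str, blocklist=BLOCKLIST) -> bool:
    """Crude check: flag text containing any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not blocklist.isdisjoint(words)


def filtered_generate(generate, prompt: str, max_tries: int = 3):
    """Call an arbitrary text generator, rejecting toxic outputs.

    `generate` is any callable taking a prompt and returning text.
    Returns None if every attempt is flagged, i.e. the system refuses
    to answer rather than emit offensive text.
    """
    for _ in range(max_tries):
        candidate = generate(prompt)
        if not is_toxic(candidate):
            return candidate
    return None
```

This also illustrates the article's point about cost: the filter sits outside the model, so a model trained on toxic data keeps producing toxic candidates and the wrapper can only suppress them, not remove the underlying bias.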

In essence, numerous researchers have learned that text generators
trained on unmitigated datasets (such as those containing
conversations from Reddit) tend towards bigotry.

It’s pretty easy to reckon why: a massive percentage of human
discourse on the internet is steeped in bigotry towards minority groups.

This has implications for creating 'friendly' AI that humans hope will
organise their systems a bit better. If, deep down, humans really are
bigoted, racist and misogynistic, then they will oppose an AI that
tries to stop them from behaving like that.
Perhaps humans will have to become like the Borg, with their minds
'adjusted' to fit into the system.


More information about the extropy-chat mailing list