[ExI] AI is racist, bigoted and misogynistic
sen.otaku at gmail.com
Wed Sep 22 13:12:39 UTC 2021
Is that ethical though? To deny people freedom of thought and association?
> On Sep 22, 2021, at 7:18 AM, BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> But there’s an even greater challenge stymieing the machine learning
> community, and it’s starting to make the world’s smartest developers
> look a bit silly. We can’t get the machines to stop being racist,
> xenophobic, bigoted, and misogynistic.
> Text generators, such as OpenAI’s GPT-3, are toxic. Currently, OpenAI
> has to limit usage of GPT-3 because, without numerous filters in
> place, it’s almost certain to generate offensive text.
> In essence, numerous researchers have learned that text generators
> trained on unmitigated datasets (such as those containing
> conversations from Reddit) tend towards bigotry.
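The post-generation filtering described above can be sketched very roughly. A real deployment would use a trained toxicity classifier over the model's output; the blocklist, placeholder tokens, and function name below are made up purely for illustration.

```python
# Hypothetical sketch of an output filter applied after text generation.
# Production systems score text with a learned classifier; a simple
# token blocklist just illustrates the gating step.

BLOCKLIST = {"slur1", "slur2", "hateful"}  # placeholder tokens, not a real lexicon

def filter_output(text: str) -> str:
    """Return the generated text if it passes the filter, else withhold it."""
    tokens = {tok.strip(".,!?").lower() for tok in text.split()}
    if tokens & BLOCKLIST:
        return "[output withheld by content filter]"
    return text
```

The weakness the article points at is visible even in this toy: the filter only gates the output after the fact, while the bias it is guarding against was learned from the training data itself.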
> It’s pretty easy to see why: a massive percentage of human
> discourse on the internet is bigoted towards minority groups.
> This has implications for creating 'friendly' AI that humans hope
> will organise their systems a bit better. If, deep down, humans
> really are bigoted, racist and misogynistic, then they will oppose
> an AI that tries to stop them from behaving like that.
> Perhaps humans will have to become like the Borg, with their minds
> 'adjusted' to fit into the system.