[ExI] AI is racist, bigoted and misogynistic

Darin Sunley dsunley at gmail.com
Thu Sep 23 20:06:55 UTC 2021


>  I think the obvious answer is to sanitise the materials you are training
your AIs on

This builds on and extends what I was trying to say earlier:

An AI knows nothing other than its training data. Indeed, its training
data constitutes what the AI /is/, under a modern machine learning
paradigm. The training data defines reality for the AI.

So let's sanitize the data as much as we can. Fair enough, and a pretty
good idea, up to a point.

Because an AI is only useful to the extent that it can operate in reality.
That means it has to correspond to the actual reality it will work in and
interact with. If the reality that formed it via its training data and the
reality it will operate in are too different, the AI will be literally
stupid. The sanitized data will constitute a version of reality too
divergent from the AI's intended work environment, and it will fail,
possibly hilariously.
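To make that concrete, here's a minimal Python sketch (my own toy
illustration, not anything from a real system; the corpus, labels, and the
classify() helper are all invented for this example). It shows the failure
mode directly: a model whose entire "reality" is a sanitized training
vocabulary has no way even to represent the hostile inputs it meets at
deployment time.

    # Toy illustration of train/deploy mismatch. All data is made up.
    # Hypothetical "sanitized" corpus: hostile examples were filtered out
    # before training, so politeness is the only reality the model has seen.
    sanitized_corpus = [
        ("thank you for your help", "ok"),
        ("what a lovely day", "ok"),
        ("I appreciate the feedback", "ok"),
    ]

    # The model's world: the vocabulary of its training data, nothing more.
    known_words = {w for text, _ in sanitized_corpus for w in text.split()}

    def classify(text: str) -> str:
        """Fall back to 'unknown' when most of the input lies outside
        the training vocabulary -- i.e., outside the model's reality."""
        words = text.split()
        coverage = sum(w in known_words for w in words) / len(words)
        return "ok" if coverage > 0.5 else "unknown"

    print(classify("thank you for the help"))       # -> ok
    print(classify("you absolute waste of space"))  # -> unknown: blind spot

A real language model fails less transparently than this, of course:
instead of answering "unknown" it produces confidently wrong output, which
is where the hilarity comes in.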

A humanities academic or journalist is capable of doublethink. [Indeed,
they demand it of the entire world, given even the slightest bit of real
power.] They can compartmentalize between reality-as-it-should-be and
reality-as-it-is. An AI cannot. Hence the existential incompatibility.




On Thu, Sep 23, 2021 at 10:44 AM Tom Nowell via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I think the obvious answer is to sanitise the materials you are training
> your AIs on. I'm slowly attempting to learn data science, and the one thing
> instructors try and hammer home is making sure what's going in is useful.
>
> Let's use a natural neural network (by which I mean a brain) - no matter
> how badly a small child provokes its teacher, most teachers will try
> to avoid the temptation to scream "**** off you little ****, your parents
> should have had an abortion". Those who do find themselves no longer
> employed as teachers. Also look at the behaviour of parents - how many
> people do you know who try to avoid their worst vices in front of their
> kids, like not smoking in front of them or cutting down on curse words? So
> why does anyone think training an AI on unfiltered adult language is going
> to yield anything better than humanity at its worst?
>
> Perhaps they should train the AI on Sesame Street first, and slowly
> introduce the unfiltered hatred of humanity.
>
> Somebody get me a job as an AI educator.
>
> On a related note, the mass of information and dubious opinions on social
> media has got a lot of people worrying about the effects on natural
> intelligence - every time someone in the UK gets tried under terror laws,
> the media always mention whatever evidence the police have of them being
> radicalised online - whether Islamic websites showing jihadis or neo-Nazi
> websites offering dubious texts on how to make bombs at home. Reports about
> increasing rates of eating disorders mention the effect of edited photos
> and glossy advertising on body image. Is the way we are consuming media and
> sharing information with each other a bad fit for human mental health?
>
> Tom
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>