[ExI] AI is racist, bigoted and misogynistic

Tom Nowell nebathenemi at yahoo.co.uk
Thu Sep 23 16:41:34 UTC 2021


 I think the obvious answer is to sanitise the materials you are training your AIs on. I'm slowly attempting to learn data science, and the one thing instructors try to hammer home is making sure what goes in is useful.
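For what it's worth, a crude version of that filtering step is easy to sketch in a few lines of Python. Everything here is a made-up placeholder rather than a real moderation pipeline - the blocklist, the threshold, and the scoring function are just illustrations of the idea of checking what goes in:

# A minimal sketch of sanitising a text corpus before training.
# The blocklist and threshold are invented placeholders, not a real
# moderation tool; real pipelines use trained toxicity classifiers.

BLOCKLIST = {"badword1", "badword2"}  # hypothetical entries

def toxicity_score(text: str) -> float:
    """Fraction of tokens that appear on the blocklist."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def clean_corpus(documents, threshold=0.01):
    """Drop documents whose blocklist hit rate exceeds the threshold."""
    return [doc for doc in documents if toxicity_score(doc) <= threshold]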
Let's use a natural neural network (by which I mean a brain) - no matter how badly a small child provokes its teacher, most teachers will try to resist the temptation to scream "**** off you little ****, your parents should have had an abortion". Those who do find themselves no longer employed as teachers.

Also look at the behaviour of parents - how many people do you know who try to hide their worst vices in front of their kids, like not smoking in front of them or cutting down on curse words? So why does anyone think training an AI on unfiltered adult language is going to yield anything better than humanity at its worst?
Perhaps they should train the AI on Sesame Street first, then slowly introduce it to the unfiltered hatred of humanity.
Somebody get me a job as an AI educator.
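Half-joking aside, the "Sesame Street first" idea does exist in the field under the name curriculum learning: start on simple, clean data and ramp the difficulty up. A toy sketch, assuming we already have some per-document difficulty score (the hypothetical toxicity_score above would do):

# A toy sketch of curriculum ordering: feed the model the gentlest part
# of the corpus first, then gradually mix in harder material. The
# difficulty function is whatever score you trust; stages is arbitrary.

def curriculum_batches(documents, difficulty, stages=3):
    """Yield the corpus in stages of increasing difficulty."""
    ordered = sorted(documents, key=difficulty)
    step = max(1, len(ordered) // stages)
    for stage in range(1, stages + 1):
        # Each stage re-exposes everything seen so far, plus harder data.
        end = len(ordered) if stage == stages else step * stage
        yield ordered[:end]

# Hypothetical usage, reusing toxicity_score from the earlier sketch:
# for batch in curriculum_batches(corpus, toxicity_score):
#     train_model_on(batch)  # stand-in for a real training loop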
On a related note, the mass of information and dubious opinions on social media has a lot of people worrying about the effects on natural intelligence. Every time someone in the UK is tried under terror laws, the media always mention whatever evidence the police have of them being radicalised online, whether Islamic websites showing jihadis or neo-Nazi websites offering dubious texts on how to make bombs at home. Reports about rising rates of eating disorders mention the effect of edited photos and glossy advertising on body image. Is the way we consume media and share information with each other a bad fit for human mental health?
Tom