GPT-3 was trained on a mess of internet data, so it would be astonishing if it weren't biased. OpenAI has been putting work into fine-tuning its models to reduce that bias, but much of it remains. Ideally one would train these models only on factually accurate, eloquent text, but such data is relatively scarce. The most effective method so far is to train on the junk and then refine.

On Mar 2, 2023, at 12:22 PM, William Flynn Wallace via extropy-chat <extropy-chat@lists.extropy.org> wrote:

> from Neurosciencenews daily:
>
> “One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,” explains Binz, lead author of the study.
>
> Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement?
>
> Most people intuitively pick the second alternative, even though the added condition – that Linda is active in the feminist movement – makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.
>
> So they are programming cognitive biases into the AIs? Inadvertently, of course. Bill W
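For anyone who wants the probability point in the quoted piece spelled out: “less likely from a probabilistic point of view” is just the conjunction rule, P(A and B) <= P(A). Here is a toy sketch in Python; the numbers are made up purely for illustration, only the inequality matters:

    # Conjunction rule behind the Linda problem.
    # The probabilities below are invented (hypothetical); only the inequality matters.
    p_teller = 0.05                  # P(Linda is a bank teller) -- assumed value
    p_feminist_given_teller = 0.90   # P(feminist | bank teller) -- assumed, even if very high

    # P(teller AND feminist) = P(teller) * P(feminist | teller)
    p_both = p_teller * p_feminist_given_teller

    print(f"P(bank teller)              = {p_teller:.3f}")
    print(f"P(bank teller AND feminist) = {p_both:.3f}")
    assert p_both <= p_teller  # a conjunction can never be more probable than either conjunct

No matter how well “feminist” fits Linda’s description, the joint statement can never be more probable than the plain “bank teller” statement, and that is the fallacy both people and GPT-3 fall for.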