[ExI] ai test

Gadersd gadersd at gmail.com
Thu Mar 2 18:12:24 UTC 2023


GPT-3 was trained on a mess of internet data, so it would be astounding if it weren't biased. OpenAI has been putting work into fine-tuning its models to reduce the bias, but much of it still remains. Ideally one would train these models only on factually accurate, eloquent data, but such data is relatively rare. The most effective method so far is to train on junk and then refine afterwards.
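For reference, the "probabilistic point of view" in the quoted piece is just the conjunction rule: for any two events A and B, P(A and B) <= P(A), so "bank teller and feminist" can never be more likely than "bank teller" alone. A toy sketch in Python (the numbers here are invented purely for illustration, not taken from the study):

    # Conjunction rule behind the Linda problem: P(A and B) <= P(A).
    # These probabilities are made up for illustration only.
    p_teller = 0.05                  # P(Linda is a bank teller)
    p_feminist_given_teller = 0.80   # P(feminist | bank teller), assumed
    p_both = p_teller * p_feminist_given_teller  # P(teller and feminist)
    assert p_both <= p_teller        # the conjunction can't beat its conjunct
    print(p_teller, p_both)          # 0.05 vs 0.04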

> On Mar 2, 2023, at 12:22 PM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> from Neurosciencenews daily:
> 
> “One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,” explains Binz, lead author of the study.
> 
> Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement?
> 
> Most people intuitively pick the second alternative, even though the added condition – that Linda is active in the feminist movement – makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.
> 
> 
> 
> So they are programming cognitive biases into the AIs?  Inadvertently, of course.  Bill W
