[ExI] ai test
Tara Maya
tara at taramayastales.com
Thu Mar 2 19:11:29 UTC 2023
I don't think it's inadvertent. The censorship of certain topics and the censoriousness on certain other topics is certainly built right in. (Which makes it rather annoying for writing fiction, I've found. Bad guys are SUPPOSED to have loathsome opinions. But that's another issue...)
After all, we all know darn well that Linda is a feminist and only works as a bank teller because she couldn't get any other job with her Womyn's Studies degree. No one wants emails from a robot that can't guess that too... ;)
> On Mar 2, 2023, at 9:22 AM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> “One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,” explains Binz, lead author of the study.
>
> Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement?
>
> Most people intuitively pick the second alternative, even though the added condition – that Linda is active in the feminist movement – makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.
>
> So they are programming cognitive biases into the AIs? Inadvertently, of course.
>
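For what it's worth, the probabilistic point the quoted study leans on is just the conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A), since no conditional probability can exceed 1. A minimal Python sketch with invented numbers (nothing below comes from Binz's study, the probabilities are made-up guesses for illustration) makes it concrete:

# Conjunction rule: P(A and B) <= P(A), whatever B is.
# Probabilities are illustrative guesses, not data from the study.
p_teller = 0.05                      # P(Linda is a bank teller)
p_feminist_given_teller = 0.8        # P(feminist | bank teller), a generous guess
p_both = p_teller * p_feminist_given_teller   # P(bank teller AND feminist)
assert p_both <= p_teller            # the conjunction can never be the likelier option
print(p_teller, p_both)              # 0.05 vs ~0.04

However generously you set the conditional, the conjunction stays at or below the single statement, which is why picking it is a fallacy.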