[ExI] ai test

Stuart LaForge avant at sollegro.com
Sat Apr 15 03:37:54 UTC 2023

Quoting William Flynn Wallace via extropy-chat  
<extropy-chat at lists.extropy.org>:

> from Neurosciencenews daily:
> “One classic test problem of cognitive psychology that we gave to GPT-3 is
> the so-called Linda problem,” explains Binz, lead author of the study.
> Here, the test subjects are introduced to a fictional young woman named
> Linda as a person who is deeply concerned with social justice and opposes
> nuclear power. Based on the given information, the subjects are asked to
> decide between two statements: is Linda a bank teller, or is she a bank
> teller and at the same time active in the feminist movement?
> Most people intuitively pick the second alternative, even though the added
> condition – that Linda is active in the feminist movement – makes it less
> likely from a probabilistic point of view. And GPT-3 does just what humans
> do: the language model does not decide based on logic, but instead
> reproduces the fallacy humans fall into.
> So they are programming cognitive biases into the AIs?  Inadvertently, of
> course.  Bill W
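The quoted "less likely from a probabilistic point of view" claim is just the conjunction rule: for any events A and B, P(A and B) can never exceed P(A). A minimal sketch, using made-up illustrative numbers (none come from the study):

```python
# Conjunction rule illustrated with the Linda problem.
# All probabilities below are hypothetical, chosen only to show the inequality.
p_bank_teller = 0.05            # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.8   # assumed P(feminist | bank teller), even if high
p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

# The conjunction is necessarily no more probable than either conjunct.
assert p_teller_and_feminist <= p_bank_teller
print(p_teller_and_feminist, p_bank_teller)
```

However high the conditional probability of feminism given the description of Linda, the conjunction is bounded above by P(bank teller) alone, which is why the intuitive answer is a fallacy.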

No Bill, they are not programming anything into AIs. AIs are like  
human children, tabulae rasae upon which anything can be imprinted.  
Intelligence has always been about being a quick study, even if  
what you are studying is complete garbage. If intelligence were really  
about "knowing it all" relative to objective TRUTH, then this man  
might have been God:


Instead, he was just another unhappy soul who lived and died in  
relative obscurity compared to Kim Kardashian.

Intelligence, no matter how great, is merely an advantage in a game of  
imperfect information and not a supernatural power in the slightest.  
Fear not intelligence, artificial or natural. Instead fear ignorance.  
Because as H.G. Wells once said, "Human history becomes more and more  
a race between education and catastrophe."

Stuart LaForge
