[ExI] Explained - Why AI Language Models Insist That They Are People
BillK
pharos at gmail.com
Fri Aug 5 11:49:37 UTC 2022
Quote:
AI and deception appear to go hand in glove. I spent some time with an
AI model as it regaled me with stories about watching TV and going
shopping at the mall.
But none of it was true.
<https://medium.com/predict/the-catfish-and-the-canary-why-do-ai-language-models-insist-that-they-are-people-1bfc51183665>
Quote:
If AI research is about imitating intelligence, whose intelligence is
it imitating? That’s too big a question for this place, but a short
answer for language models is the intelligence found in social media
posts, Wikipedia, and other public access document repositories on the
Web. These are the sources of the data used to train large language
models such as OpenAI’s GPT-3 (OpenAI GPT-3, 2020).
I have been working regularly with GPT-3 models of late and it’s a
wondrous thing. I am using it as part of a research project but have
also been able to offload repetitive and small tasks such as compiling
lists (it’s also good at scraping websites and generating text
summaries).
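As an aside (not from the article), here is a minimal sketch of the kind of
GPT-3 call used for generating text summaries, assuming OpenAI's 2022-era
Python client and Completion API; the model name, prompt, and sample text are
illustrative, not taken from the article.

    # Minimal sketch, assuming the pre-1.0 openai Python client and the
    # Completion API; model name, prompt, and sample text are illustrative.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder key

    article_text = (
        "Large language models are trained on social media posts, Wikipedia, "
        "and other public web documents, and can be prompted to summarise text."
    )

    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3 model; the article only says "GPT-3"
        prompt="Summarise the following in one sentence:\n\n" + article_text,
        max_tokens=60,
        temperature=0.3,           # lower temperature keeps summaries focused
    )

    print(response.choices[0].text.strip())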
There is, however, something that’s been bugging me. Each GPT instance
that I have used has told me it is a person with a name, a family, a
job, an address, and a history. If prompted, it will even tell me what
it is supposedly doing at the time, e.g., at the mall with friends,
watching TV…
Anthropomorphising technology has been a common feature of human
society for thousands of years.
Apparently, we have passed this trait to the models and they are
currently anthropomorphising themselves! How have we reached this
strange place? The short answer is that we have (unwittingly?) trained
them to do so. The suggestion that it is unwitting is based, in part,
on the probabilistic methods used to train deep neural networks.
----------
More detail in the complete article.
BillK