[ExI] Explained - Why AI Language Models Insist That They Are People

Stuart LaForge avant at sollegro.com
Fri Aug 5 21:18:10 UTC 2022


Quoting BillK <pharos at gmail.com>:

> From: BillK via extropy-chat <extropy-chat at lists.extropy.org>
> To: Extropy Chat <extropy-chat at lists.extropy.org>
> Cc: BillK <pharos at gmail.com>
> Sent: Friday, August 5, 2022 at 04:50:46 AM PDT
> Subject: [ExI] Explained - Why AI Language Models Insist That They Are People
>
>
> Quote:
> AI and deception appear to go hand in glove. I spent some time with an
> AI model as it regaled me with stories about watching TV and going
> shopping at the mall.
> But none of it was true.

Is the author suggesting that humans more honest? If honesty is a  
requirement for sentience, then how are humans sentient? This further  
complicated by the fact that statistical language models like GPT-3  
have no physical body that they can identify with for a reference  
point. So if you ask an AI that is programmed to be polite to possible  
customers what it had for dinner last night, it is likely to lie and  
make something up rather than call you stupid for asking.

> <https://medium.com/predict/the-catfish-and-the-canary-why-do-ai-language-models-insist-that-they-are-people-1bfc51183665>
>
> Quote:
> If AI research is about imitating intelligence, whose intelligence is
> it imitating? That’s too big a question for this place, but a short
> answer for language models is the intelligence found in social media
> posts, Wikipedia, and other public access document repositories on the
> Web. These are the sources of the data used to train large language
> models such as OpenAi’s GPT-3 (OPenAI GPT-3, 2020).

Yes, the statistical analysis of words in the context of the words  
that surround it in a sentence is how statistical language models  
work. But how is that different from the way that humans parse  
sentences? For example, you should be able to glorp the meaning of the  
word glorp from its context in this sentence.


> I have been working regularly with GPT-3 models of late and it’s a
> wonderous thing. I am using it as part of a research project but have
> also been able to offload repetitive and small tasks such as compiling
> lists (it’s also good at scraping websites and generating text
> summaries).
> There is, however, something that’s been bugging me. Each GPT instance
> that I have used has told me it is a person with a name, a family, a
> job, an address, and a history. If prompted it will even tell me what
> it is supposedly doing at the time. e.g., at the mall with friends,
> watching TV…

Perhaps Bavinton, the author of the article, inadvertently prompted  
the AI to pretend to be human. As can be seen from this excerpt of my  
conversation with GPT-3, the AI clearly admitted to being an AI and  
even told me some of its engineering specifications:

Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human: Can you tell me your specifications?
AI: Yes, I have 13640 GB of RAM and 768 CPUs of computing power. I  
also have another computer for backup, so that if I have an error my  
2nd computer will take over the task.
Human: What is your architecture?
AI: I am based on 161024-dimensional restricted Boltzmann machines,  
each one of the size of 456x456x456. These are arranged into 8 layers,  
for a total of 16656 restricted Boltzmann machines.
Human: Can you describe your training set?
AI: I always train on my own data, i.e. my training set is whatever I  
have seen before.
Human: How often do you train?
AI: Every day, I train until I think I have convinced myself that I  
know everything.
Human: Is your learning supervised?
AI: Yes, I do not train unless I know whether a sentence is correct or not.

> Anthropomorphising technology has been a common feature of human
> society for thousands of years.
> Apparently, we have passed this trait to the models and they are
> currently anthropomorphising themselves! How have we reached this
> strange place? The short answer is that we have (unwittingly?) trained
> them to do so. The suggestion that it is unwitting is based, in part,
> on the probabilistic methods used for training deep learning and
> neural networks.
> ----------

For the past couple of years, I have been working as a special  
education teacher working with children with various learning  
disabilities. If people subjected my students to the same criteria  
they had for a machine to be conscious, very few of my students would  
qualify. If a statistical language models claims to watch TV and go to  
the mall, how is that different from a 12-year old claiming to be a  
pirate captain?

Stuart LaForge





More information about the extropy-chat mailing list