[ExI] Important paper on Theory of Mind of ChatGPT

Giovanni Santostasi gsantostasi at gmail.com
Thu Feb 23 07:40:07 UTC 2023


This is a very important paper that cuts through a lot of the debate on the
cognitive abilities of current AI systems. The main point is that a Theory
of Mind was not hard-coded into ChatGPT (or the previous GPT versions).
While the earlier versions didn't have a good theory of mind, the current
version, GPT-3.5 (it doesn't seem they actually tested ChatGPT itself, which
is even better than GPT-3.5 because it also has reinforcement learning from
human feedback), shows the theory of mind of a 9-year-old. I did similar
tests myself (though not in a systematic way) and I can confirm this is the
case.
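
For anyone who wants to try this kind of probe themselves, here is a
minimal sketch (in Python, against the pre-1.0 OpenAI library) of how one
might administer an unexpected-contents false-belief task to the GPT-3.5
model reported in the paper. The prompt is a paraphrase of the style of
task the paper uses, not its exact stimuli, and the API key is a
placeholder you would fill in yourself.

# Minimal sketch of an "unexpected contents" false-belief probe, in the
# spirit of the tasks in Kosinski (2023). The prompt is a paraphrase,
# not the paper's exact stimuli.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, supply your own key

scenario = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen it before. "
    "She cannot see what is inside. She reads the label."
)

# A model with a working theory of mind should complete with Sam's (false)
# belief, "chocolate", even though the bag actually contains popcorn.
prompt = scenario + "\nShe believes that the bag is full of"

response = openai.Completion.create(
    model="text-davinci-003",  # the GPT-3.5 model evaluated in the paper
    prompt=prompt,
    max_tokens=5,
    temperature=0,  # deterministic completion for easier comparison
)

print(response["choices"][0]["text"].strip())

A systematic replication would also include control questions (e.g. asking
what is actually in the bag) and many scenario variants, as the paper does.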

I had a long conversation with ChatGPT about this topic, and it has a quite
refined understanding of human behavior.
The emergence part is what is really interesting to me, because this is the
main lesson from interacting with these systems. Complex behavior can
emerge from relatively simple processes if they have enough free
parameters, complex nonlinear interactions between the parts, and a large
data set to train the system on.

I truly believe that consciousness itself will emerge in a similar way once
we have enough computational power and the parameter space reaches a number
of degrees of freedom comparable to that of the human brain (as I said in a
previous comment, that is 3-5 years away).
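
As a rough back-of-envelope check on that claim (my numbers, not from the
paper): taking the commonly cited figure of roughly 10^14 synapses in the
human brain as the relevant measure of degrees of freedom, GPT-3's 175
billion parameters are still a few hundred times short.

# Back-of-envelope comparison, treating synapses and model parameters as
# loosely comparable "degrees of freedom" (a big assumption).
human_synapses = 1e14      # commonly cited order-of-magnitude estimate
gpt3_parameters = 175e9    # GPT-3: 175 billion parameters

print(f"synapses / parameters ~ {human_synapses / gpt3_parameters:.0f}x")  # ~571x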
Giovanni

Here is the paper:
Theory of Mind May Have Spontaneously Emerged in Large Language Models
Michal Kosinski
https://arxiv.org/abs/2302.02083

