[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Gordon Swobe gordon.swobe at gmail.com
Fri Mar 24 06:18:49 UTC 2023


Nobody, least of all me, questions that GPT-4 will be capable of amazing
feats, or that these language models will eventually surpass humans in
what we call intelligence, or what I might, for the sake of clarity,
prefer to call apparent intelligence. The question here is whether they
will know what they are saying, given that they are trained only on the
forms of words, with no access to their meanings or referents.
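To make that point concrete, here is a minimal sketch, my own toy
illustration and nothing like GPT-4 in scale or architecture, of a model
that learns purely from the forms of words. It counts which word form
follows which and predicts on that basis; at no point does anything in
it touch a rock, point at one, or have any referent at all:

# Toy illustration only: a "language model" that sees nothing but the
# forms of words. It counts which token id follows which and predicts
# from those counts alone. No pointing, no rocks, no referents.
from collections import Counter, defaultdict

corpus = "spock points to the rock . kirk points to the rock .".split()

# Map each word form to an arbitrary integer id; the model never sees more.
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Count bigram transitions between ids.
follows = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word form most likely to follow, judged purely by form."""
    next_id, _ = follows[vocab[word]].most_common(1)[0]
    inverse = {i: w for w, i in vocab.items()}
    return inverse[next_id]

print(predict_next("the"))  # -> "rock", learned from word order alone

The caricature is crude, but it shows what I mean by training on forms
alone: everything the model learns is a relation between word forms, not
a relation between words and the world.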

Adrian has made the excellent point a couple of times that this is like
the first-contact problem in science fiction, and indeed like the
first-contact problem between any two cultures with completely different
languages. Q: When Kirk and Spock beam down to a new planet with
intelligent alien life, how will they learn to communicate? A: With
referents.

Spock will point to himself and say "Spock." Kirk will point to himself and
say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock use
these kinds of referents to initiate communication. If our alien friend
wants to communicate, he will point to the rock and say "fwerasa" (or
whatever his word for rock is). He will point to himself and say his name,
and so on. Eventually, Spock and the alien will learn how to translate a
few words, and from there the process of understanding begins.

Now, what if they don't beam down to the planet, but instead only listen to
digital radio signals coming from it and send digital radio signals in
return? No communication is possible, as there are no referents. It's all
noise.

-gts


