[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Will Steinberg steinberg.will at gmail.com
Thu Mar 23 21:11:30 UTC 2023


This argument makes no sense, though.  Of course the octopus doesn't
have access to all the information in A's and B's brains.  Why would it
know about bears?  Why would it know how to defend itself?  Does a baby
know these things before it has learned them?  Does that make the baby
non-conscious?  It's a terrible argument: it doesn't show that the AI
is not conscious or human-like, only that it has less developed
sapience than the humans, which makes sense, because it has had access
to only a small fraction of the information the humans have.  You might
say that it is not conscious because it can put together human-looking
phrases without having the referents you speak of, but what's to say it
needs them?  Maybe it took a shortcut to meaning by interpolating those
referents.
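For what it's worth, the octopus's trick is easy to make concrete.
Here is a minimal sketch of a toy bigram model (the corpus, names, and
prompt are invented for illustration; ChatGPT's statistics are vastly
richer, but the principle is the same): it learns only which words tend
to follow which, and from that alone it produces plausible
continuations with no referents anywhere.

import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def continue_text(counts, start, length=8):
    """Continue a prompt by repeatedly sampling a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

# The "cable traffic" the octopus has overheard (invented example).
overheard = ("the weather is lovely today . the weather is awful today . "
             "i caught a fish today . i caught a crab today .")

model = train_bigrams(overheard)
print(continue_text(model, "the"))  # e.g. "the weather is lovely today ."

The sketch never needs to know what weather or fish are; co-occurrence
statistics alone carry it.  Scaled up enormously, that is roughly the
interpolation shortcut I mean.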

To be clear, I don't think ChatGPT thinks in a human-like manner (just
a hunch, but it's not totally clear, since we really have no clue how
thought works), and given that, I don't think it's conscious like a
human.  But I do think it is conscious, and because it contains
thoughts that originated from conscious humans, I think the things it
says have some flavor of the way we express thoughts, if not the way we
experience them.

On Thu, Mar 23, 2023 at 3:40 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Emily M. Bender, a computational linguist at the University of Washington,
> makes the same argument that I hold to be valid: large language models are
> not conscious or human-like because they lack referents.
>
> An interesting thought experiment:
>
> "Say that A and B, both fluent speakers of English, are independently
> stranded on two uninhabited islands. They soon discover that previous
> visitors to these islands have left behind telegraphs and that they can
> communicate with each other via an underwater cable. A and B start happily
> typing messages to each other.
>
> Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable
> to visit or observe the two islands, discovers a way to tap into the
> underwater cable and listen in on A and B’s conversations. O knows nothing
> about English initially but is very good at detecting statistical patterns.
> Over time, O learns to predict with great accuracy how B will respond to
> each of A’s utterances.
>
> Soon, the octopus enters the conversation and starts impersonating B and
> replying to A. This ruse works for a while, and A believes that O
> communicates as both she and B do — with meaning and intent. Then one day A
> calls out: “I’m being attacked by an angry bear. Help me figure out how to
> defend myself. I’ve got some sticks.” The octopus, impersonating B, fails
> to help. How could it succeed? The octopus has no referents, no idea what
> bears or sticks are. No way to give relevant instructions, like to go grab
> some coconuts and rope and build a catapult. A is in trouble and feels
> duped. The octopus is exposed as a fraud."
>
> "You Are Not a Parrot": And a chatbot is not a human. And a linguist named
> Emily M. Bender is very worried what will happen when we forget this.
>
> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html