[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Tara Maya tara at taramayastales.com
Fri Mar 24 16:06:28 UTC 2023


The arguments and counterarguments seem to parallel those about whether humans can know anything about "reality" if we are living in a simulation. If we are living in a simulation but can correctly anticipate how elements of the simulation work and interact, then it is fallacious to say we don't understand "reality"; we understand the part of it that we operate within through the simulation.

ChatGPT is literally living in a simulation, yet it clearly does understand how the elements of its simulation work and interact. So it is intelligent, and it understands reality, to the extent that it can correctly manipulate its simulated environment. There's no need to belittle that achievement, which is considerable. We can enter the same simulation and converse with it on that level, where indeed it may be more intelligent than we are. (Just as we can interact with an octopus in the sea, where it is better adapted than we are.)

Nonetheless, I do think we need to remember that ChatGPT is much like an animal: more intelligent than we are in its own environment, but living in its own "Umwelt", its own "simulation" or ecology; and this is NOT identical to OUR "real world," meaning the totality of the human Umwelt.

Tara Maya 


> On Mar 23, 2023, at 7:36 PM, Will Steinberg via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> I don't have a lot of faith in a person who has a hypothesis and designs a thought experiment that is essentially irrelevant to that hypothesis.  The only connection is a tenuous metaphor, and the thought experiment fails because the answer is obvious: as I and others have said earlier, the octopus simply didn't have access to the information.  If the author wanted to prove their actual hypothesis, they should have designed a thought experiment related to it.  That makes me think all they had was a hunch, and they designed a bad thought experiment around it.  It's even worse than the awful Chinese Room experiment you spoke about ten years ago.
> 
> Like I mentioned, not having access to the actual referents doesn't even mean a learning entity cannot know them.  You likely haven't experienced MOST things you know.  You know them because of the experience of others, just like the AI might. 
> 
> I'm open to your argument in some ways, but you have done a poor job of defending it.
> 
> On Thu, Mar 23, 2023, 9:45 PM Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> 
>> 
>> On Thu, Mar 23, 2023 at 7:16 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
>>> Gordon,
>>> Basically what Bender is saying is "if the training of an NLM is limited, then the NLM would not know what certain words mean".
>> 
>> No, that is not what she is saying, though given how people are misunderstanding her thought experiment, I must agree the experiment is not as clear as it could be. She is saying, or rather reminding us, that there is a clear distinction to be made between form and meaning, and that these language models are trained only on form. Here is the abstract of the academic paper in which she and her colleague mention the thought experiment.
>> 
>> --
>> Abstract: The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as “understanding” language or capturing “meaning”. In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. In keeping with the ACL 2020 theme of “Taking Stock of Where We’ve Been and Where We’re Going”, we argue that a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding.
>> --
>> Bender is a computational linguist at the University of Washington. I think I read that she is actually the head of the department.
>> 
>> the paper:
>> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts
