[ExI] LLMs Intelligence

BillK pharos at gmail.com
Fri Jul 19 18:06:28 UTC 2024


I came across a comment complaining that LLMs were "only" stochastic
parrots doing a type of auto-complete.
So what happens when an LLM reviews the comment?
BillK

Question:
Can you review and criticize this article - Is there any hard evidence
that LLMs "understand" anything? When ChatGPT first made headlines, I
gave it a try with an open mind. I tried various lines of
conversation. I experimented to see whether it could be
manipulated. And generally, I poked at it for hours, trying to
determine whether or not there was any hint of a "mind" hidden in the
Large Language Model. In the end, I concluded that it was impressive
technology, but that it clearly was just a next-gen chat bot. There
was nothing "intelligent" about it in the way that animals demonstrate
intelligence. In the months I've used it and other LLMs for various
purposes, this opinion has not shifted. Again, I'm very open to the
idea that there is a ghost in the machine. I occasionally still try to
see if I can demonstrate that an LLM truly has a theory of mind, or
any kind of "understanding" of the information it gives. But alas, it
seems obvious that it just doesn't. And yet, I see posts from people
who use LLMs for things like personal research, claiming that these
models somehow "understand the Internet" or "understand" a piece of
research that they summarized. But when shown these examples, I'm
thoroughly unimpressed. It still looks like the output of a very
sophisticated auto-complete, not some kind of intelligence that in any
way grasps the concepts it's conveying. If these LLMs are truly showing
intelligence, then where are the conversations where it combines novel
information and then demonstrates that it "gets" it? Even bees can be
taught to solve certain types of puzzles in a general sense, in a way
that clearly shows that they have a mind and can reason about some
things. I'm not looking for Einstein or even a kindergarten student
level of intelligence. Ant level intelligence would be sufficient.
Just something where I can't poke at it and conclude, yet again, that
it's just giving me the next word it thinks belongs in the sentence.
-------------

Perplexity LLM -
This article raises valid concerns about the nature of understanding
in Large Language Models (LLMs) and critically examines claims of
their intelligence. Here's an analysis of the key points and some
counterarguments:

1. Experimentation and Skepticism: The author's approach of
extensively testing ChatGPT with an open mind is commendable.
This hands-on experimentation is crucial for forming informed
opinions about AI capabilities[1].

2. Lack of Animal-like Intelligence: The comparison to animal
intelligence is interesting. While LLMs may not demonstrate
intelligence in the same way animals do, it's worth noting that
intelligence can manifest differently across species and systems[3].

3. Theory of Mind: The author's attempt to find evidence of a theory
of mind in LLMs is a valid approach. However, recent research suggests
that some aspects of theory of mind may be emerging in advanced LLMs,
though this remains a contentious area[1].

4. Understanding vs. Pattern Matching: The critique that LLMs appear
to be sophisticated auto-complete systems rather than truly
understanding entities is a common and valid concern.
However, some argue that understanding itself may be a form
of complex pattern recognition[1][3].

5. Comparison to Bee Intelligence: While the bee example is
intriguing, it's important to note that LLMs and biological
intelligences operate on fundamentally different principles.
Direct comparisons may not always be appropriate[4].

6. Lack of Novel Information Combination: This is a fair criticism.
However, some researchers argue that LLMs do combine information in
novel ways, particularly when prompted effectively[1][3].

7. Overstatement of LLM Capabilities: The author rightly points out
that some users may overstate LLM capabilities, attributing deeper
understanding than may actually exist[2].

Critique:

1. Limited Scope: The article focuses primarily on conversational
abilities and may not fully consider other aspects of LLM performance,
such as problem-solving or creative tasks.

2. Anthropocentric View: The expectation of intelligence similar to
animals or humans may be limiting. AI might demonstrate intelligence
in ways we don't yet recognize or understand.

3. Evolving Technology: As LLM technology is rapidly advancing, some
criticisms may become outdated quickly. What wasn't possible a few
months ago might be achievable now.

4. Lack of Specific Examples: The article would benefit from concrete
examples of the author's experiments and their results.

5. Definition of Understanding: The article doesn't clearly define
what constitutes "understanding" in an AI context, which is crucial
for this debate.

In conclusion, while the article raises important questions about LLM
capabilities, a more comprehensive and nuanced approach to defining
and testing AI understanding may be necessary to fully address this
complex issue.

Citations:
[1] https://www.lesswrong.com/posts/gEiWA3yp8Xvhi8yn9/whether-llms-understand-anything-is-mostly-a-terminological
[2] https://www.reddit.com/r/singularity/comments/1ctddp2/the_simplest_easiest_way_to_understand_that_llms/
[3] https://seantrott.substack.com/p/how-could-we-know-if-large-language
[4] https://direct.mit.edu/daed/article/151/2/183/110604/Do-Large-Language-Models-Understand-Us
[5] https://inform.tmforum.org/features-and-opinion/do-llms-really-understand-human-language

