[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Giovanni Santostasi gsantostasi at gmail.com
Sat Mar 25 23:33:27 UTC 2023


In a scenario where there is no dictionary or translation available and
only the statistical patterns between words in the books can be used, a
computer with significant computational power could potentially learn the
language to some extent. This process would involve using unsupervised
machine learning techniques and analyzing the co-occurrence patterns of
words within the texts. However, the resulting understanding may not be
complete or entirely accurate.

Here are some steps the computer might take to learn Thai from the books:

   1. Identify recurring patterns: Analyze the text corpus to find
      recurring patterns, such as common phrases, word sequences, or
      character combinations. These patterns may provide clues to the
      structure of the language, such as word boundaries, grammatical
      rules, or common sentence structures.

   2. Cluster similar words: Using unsupervised learning techniques, such
      as clustering algorithms, the computer can group words that
      frequently occur in similar contexts. This can help identify words
      that have similar meanings or serve similar grammatical functions in
      sentences (a rough sketch of this step follows the list).

   3. Infer grammar rules: By analyzing the patterns and relationships
      between words, the computer might be able to infer some basic
      grammar rules of Thai. For example, it may identify word order
      patterns, agreement structures, or tense markers.

   4. Create a semantic network: Based on the relationships between words
      and their contexts, the computer can start building a semantic
      network that represents the relationships between different words
      and their meanings. This network can help identify synonyms,
      antonyms, and other semantic relationships between words.

   5. Refine understanding: As the computer processes more text, it can
      refine its ability to identify patterns and relationships between
      words. This process can continue iteratively, with each cycle
      yielding a more accurate model of the language.
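
As a rough illustration of step 2, here is a minimal sketch in Python. The
corpus placeholder, window size, and cluster count are all hypothetical
choices, and it assumes the text is already segmented into words; Thai is
written without spaces between words, so segmentation would itself have to
be learned first.

from collections import Counter, defaultdict

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: the raw text of the books, one string per book.
corpus = ["replace with the segmented text of the Thai books"]
tokens = [w for doc in corpus for w in doc.split()]

# Count word-by-word co-occurrences within a +/-2 token window.
vocab = [w for w, _ in Counter(tokens).most_common(2000)]
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(tokens):
    if w not in index:
        continue
    for j in range(max(0, i - 2), min(len(tokens), i + 3)):
        if j != i and tokens[j] in index:
            counts[index[w], index[tokens[j]]] += 1

# Words with similar context distributions plausibly share a meaning or
# a grammatical function, even with no access to what any word denotes.
rows = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
labels = KMeans(n_clusters=min(50, len(vocab)), n_init=10).fit_predict(rows)
clusters = defaultdict(list)
for word, label in zip(vocab, labels):
    clusters[label].append(word)

Note that clusters found this way group related words together, but nothing
in the procedure says what any cluster refers to, which is exactly the
limitation discussed below.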

While these steps may allow the computer to derive some meaning from the
text, there are limitations to this approach. Without any prior knowledge
or context about the Thai language, culture, or the specific content of the
books, the computer's understanding of the language may be incomplete or
contain inaccuracies. Additionally, some aspects of the language, such as
idiomatic expressions or culturally specific references, may be difficult
or impossible for the computer to learn solely from statistical patterns.

In summary, while a computer can potentially learn some aspects of a
language through statistical analysis of text, the resulting understanding
may be limited and not entirely accurate. Access to dictionaries,
translations, or other linguistic resources would greatly improve the
accuracy and completeness of the language learning process.
Is there a real-life example of the above, in particular in attempts to
understand a dead language?

Yes, there have been attempts to use computational methods to decipher dead
languages and scripts, such as Linear B, Ugaritic, and the Indus script.
While some of these efforts have produced insights into the structure and
content of these languages, many challenges remain, and for scripts like
the Indus script, decipherment is still an ongoing process.

One of the most successful examples of deciphering a dead language is the
work of Michael Ventris and John Chadwick on Linear B, an ancient script
used in Bronze Age Crete and Greece. The script was deciphered in 1952 by
Ventris, an architect and amateur linguist, who used a combination of
statistical analysis, pattern recognition, and knowledge of ancient Greek
to crack the code. Ventris and Chadwick's work on Linear B laid the
foundation for modern computational methods in decipherment.

In the case of the Indus script, which is still undeciphered, researchers
have employed computational methods to analyze the statistical patterns in
the script and gain insights into its possible structure and content. For
example, Rajesh P.N. Rao, an Indian-American computer scientist, has used
machine learning techniques, such as Markov models and clustering
algorithms, to analyze the Indus script and compare it to other known
languages. While these efforts have not yet led to a complete decipherment,
they have provided valuable insights into the possible structure of the
script and have helped to inform further research.
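
As a rough illustration of the Markov-model side of this work, here is a
minimal sketch in Python that fits a first-order (bigram) model to a sign
sequence and computes its conditional entropy, a statistic measuring how
strongly each sign constrains the next. The sign sequence below is a
hypothetical stand-in; real analyses use corpora of actual inscriptions.

import math
from collections import Counter

signs = list("ABACABADABACABAE")  # hypothetical sign sequence

bigrams = Counter(zip(signs, signs[1:]))
unigrams = Counter(signs[:-1])

# Conditional entropy H(next | current): low values mean the next sign
# is strongly constrained by the current one, as in natural languages.
h = 0.0
for (a, b), n in bigrams.items():
    p_ab = n / (len(signs) - 1)    # joint probability P(a, b)
    p_b_given_a = n / unigrams[a]  # transition probability P(b | a)
    h -= p_ab * math.log2(p_b_given_a)
print(f"H(next sign | current sign) = {h:.3f} bits")

A conditional entropy well below the sequence's unigram entropy indicates
that sign order is constrained rather than random, one statistical
signature consistent with linguistic structure.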

While there has been progress in using computational methods to decipher
dead languages, it is important to note that these methods have
limitations. In many cases, a deep understanding of the cultural,
historical, and linguistic context is essential for accurate decipherment.
Furthermore, deciphering a dead language often requires collaboration
between researchers from various fields, including linguistics,
archaeology, history, and computer science.

In summary, computational methods have been employed in attempts to
decipher dead languages, and in some cases, have provided valuable
insights. However, the decipherment process is complex and requires a
combination of computational techniques, expert knowledge, and
interdisciplinary collaboration.

On Sat, Mar 25, 2023 at 3:49 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Hi Gordon,
>
> Thanks for sharing this video. I watched it and found the following
> points of interest:
>
> *1. She said they can't possibly be understanding as they are only seeing
> a sequence of characters and predicting distributions and what these models
> do is not the same thing as understanding language.*
> My Reply: These models demonstrate many emergent capabilities that were
> not things that were programmed in or planned. They can answer questions,
> summarize texts, translate languages, write programs, etc. All these
> abilities emerged purely from being trained on the single task of
> predicting text. Given this, can we be certain that "understanding" is not
> another one of the emergent capabilities manifested by the LLM?
>
> *2. She uses the analogy that the LLM looking at characters would be the
> same as a human who doesn't understand Cherokee looking at Cherokee
> characters.*
> My Reply: This is reminiscent of Searle's Chinese Room. The error is
> looking at the behavior of the computer only at the lowest level, while
> ignoring the goings-on at the higher levels. She sweeps all possible
> behavior of a computer under the umbrella of "symbol manipulation", but
> anything computable can be framed under "symbol manipulation" if described
> on that level (including what atoms, or neurons in the human brain do).
> This therefore fails as an argument that no understanding exists in the
> higher-level description of the processing performed by the computer
> program.
>
> *3. She was asked what a machine would have to do to convince her they
> have understanding. Her example was that if Siri or Alexa were asked to do
> something in the real world, like turn on the lights, and if it does that,
> then it has understanding (by virtue of having done something in the real
> world).*
> My Reply: Perhaps she does not see the analogy between turning on or off a
> light, and the ability of an LLM to output characters to a monitor as
> interacting in the real world (turning on and off many thousands of pixels
> on the user's monitor as they read the reply).
>
> *4. She admits her octopus test is exactly like the Turing test. She
> claims the hyper-intelligent octopus would be able to send some
> pleasantries and temporarily fool the other person, but that it has no real
> understanding and this would be revealed if there were any attempt to
> communicate about any real ideas.*
> My Reply: I think she must be totally unaware of the capabilities of
> recent models like GPT-4 to come to a conclusion like this.
>
> *5. The interviewer pushes back and says he has learned a lot about math,
> despite not seeing or experiencing mathematical objects, and that he has
> graded a blind student's paper which appeared to show the student could
> visualize mathematical objects despite not being sighted. She says the
> octopus never learned language: we acquired a linguistic system, but the
> hyper-intelligent octopus has not, and all it has learned is language
> distribution patterns.*
> My Reply: I think the crucial piece missing from her understanding of LLMs
> is that the only way for them to achieve the levels of accuracy in the text
> that they predict is by constructing internal mental models of reality.
> That is the only way they can answer hypotheticals concerning novel
> situations described to them, or, for example, to play chess. The only way
> to play chess with an LLM is if it is internally constructing a model of
> the board and pieces. This cannot be explained in terms of mere patterns
> or distributions of language; otherwise, the LLM would be as likely to
> guess any legal move as an optimal one. And since one can readily construct
> a chess board position that has never before appeared in the history of
> the universe, we can know the LLM is not relying on memory.
>
> *6. The Interviewer asks what prevents the octopus from learning language
> over time as a human would? She says it requires joint-attention: seeing
> some object paired with some word at the same time.*
> My Reply: Why can't joint attention manifest as the co-occurrence of words
> as they appear within a sentence, paragraph, or topic of discussion?
>
> *7. The interviewer asks do you think there is some algorithm that could
> possibly exist that could take a stream of words and understand them in
> that sense? She answers yes, but that would require programming in from the
> start the structure and meanings of the words and mapping them to a model
> of the world, or providing the model other sensors or imagery. The
> interviewer confirms: "You are arguing that just consuming language without
> all this extra stuff, that no algorithm could just from that, really
> understand language?" She says that's right.*
> My Reply: We already know that these models build maps of things
> corresponding to reality in their head. See, for example, the paper I
> shared where the AI was given a description of how rooms were connected to
> each other, after which the AI was able to draw the layout of the rooms
> from this textual description. If that is not an example of understanding,
> I don't know what possibly could be. Note also: this was an early model of
> GPT-4 before it had been trained on images, it was purely trained on text.
>
> *8. She says, imagine that you are dropped into the middle of the Thai
> library of congress and you have any book you could possibly want but only
> in Thai. Could you learn Thai? The Interviewer says: I think so. She asks:
> What would you first do, where would you start? She adds if you just have
> form, that's not going to give you information. She then says she would
> have to find an encyclopedia or a translation of a book we know.*
> My Reply: We know there is information (objectively) in the Thai library,
> even if it contained no illustrations or copies of books for which we had
> translations. We know the Thai library contains scrutable information
> because the text is compressible. If text is compressible it means there
> are discoverable patterns in the text which can be exploited to reduce the
> amount of bits needed to represent it. All our understanding can be viewed
> as forms of compression. For example, the physical laws that we have
> discovered "compress" the amount of information we need to store about the
> universe. Moreover, when compression works by constructing an internal toy
> model of reality, we can play with and permute the inputs to the model to
> see how it behaves under different situations. This provides a genuine
> understanding of the outer world from which our sensory inputs are based. I
> believe the LLM has successfully done this to predict text, it has various
> internal, situational models it can deploy to help it in predicting text.
> Having these models and knowing when and how to use them, I argue, is
> tantamount to understanding.
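>
> As a rough sketch of this point (using Python's zlib on a toy string; any
> real Thai text would show the same contrast with random bytes):
>
> import os, zlib
>
> text = ("the cat sat on the mat. " * 200).encode()
> noise = os.urandom(len(text))
> # Patterned text compresses far below its original size; random bytes
> # do not, because there are no regularities to exploit.
> print(len(zlib.compress(text)) / len(text))    # tiny ratio, ~0.01
> print(len(zlib.compress(noise)) / len(noise))  # ~1.0, incompressible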
>
>
> Jason
>
>
>
> On Sat, Mar 25, 2023 at 4:30 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I mentioned Emily Bender in another thread. She is Professor of
>> Linguistics and Faculty Director of the Master's Program in Computational
>> Linguistics at University of Washington.
>>
>> In the other thread, I made the mistake of introducing her with her
>> Octopus thought experiment which I soon realized from the responses here is
>> easily misinterpreted outside of the context of her general thesis and the
>> academic paper in which she introduced it.
>>
>> As I learned from this interview, she and her colleague Koller wrote that
>> paper in response to a twitter debate in which she found herself arguing
>> with non-linguists who insist that language models understand language.
>> Like me, she is critical of such claims. She considers them "hype."
>>
>> The relevant material starts at the 26 minute mark.
>>
>> https://www.youtube.com/watch?v=VaxNN3YRhBA
>>