[ExI] The Computer Scientist Training AI to Think with Analogies

John Grigg possiblepaths2050 at gmail.com
Tue Aug 10 00:33:44 UTC 2021

I remember that *Gödel, Escher, Bach* was Eliezer Yudkowsky's favorite
book. I hope that his think tank is doing well.

"The Pulitzer Prize-winning book *Gödel, Escher, Bach* inspired legions of
computer scientists in 1979, but few were as inspired as Melanie Mitchell
<https://www.santafe.edu/people/profile/melanie-mitchell>. After reading
the 777-page tome, Mitchell, a high school math teacher in New York,
decided she “needed to be” in artificial intelligence. She soon tracked
down the book’s author, AI researcher Douglas Hofstadter, and talked him
into giving her an internship. She had only taken a handful of computer
science courses at the time, but he seemed impressed with her chutzpah and
unconcerned about her academic credentials.

Mitchell prepared a “last-minute” graduate school application and joined
Hofstadter’s new lab at the University of Michigan in Ann Arbor. The two
spent the next six years collaborating closely on Copycat
<https://www.sciencedirect.com/science/article/abs/pii/0167278990900865>, a
computer program which, in the words of its co-creators
<https://theputnamprogram.files.wordpress.com/2012/03/dhfa_chp5.pdf>, was
designed to “discover insightful analogies, and to do so in a
psychologically realistic way.”

The analogies Copycat came up with <https://arxiv.org/abs/2102.10717> were
between simple patterns of letters, akin to the analogies on standardized
tests. One example: “If the string ‘abc’ changes to the string ‘abd,’ what
does the string ‘pqrs’ change to?” Hofstadter and Mitchell believed that
understanding the cognitive process of analogy—how human beings make
abstract connections between similar ideas, perceptions and
experiences—would be crucial to unlocking humanlike artificial intelligence.

Mitchell maintains that analogy can go much deeper than exam-style pattern
matching. “It’s understanding the essence of a situation by mapping it to
another situation that is already understood,” she said. “If you tell me a
story and I say, ‘Oh, the same thing happened to me,’ literally the same
thing did not happen to me that happened to you, but I can make a mapping
that makes it seem very analogous. It’s something that we humans do all the
time without even realizing we’re doing it. We’re swimming in this sea of
analogies constantly.”

As the Davis professor of complexity at the Santa Fe Institute, Mitchell
has broadened her research beyond machine learning. She’s currently leading
SFI’s Foundations of Intelligence in Natural and Artificial Systems
project, which will convene a series of interdisciplinary workshops over the next
year examining how biological evolution, collective behavior (like that of
social insects such as ants) and a physical body all contribute to
intelligence. But the role of analogy looms larger than ever in her work,
especially in AI—a field whose major advances over the past decade have
been largely driven by deep neural networks, a technology that mimics the
layered organization of neurons in mammalian brains.

“Today’s state-of-the-art neural networks are very good at certain tasks,”
she said, “but they’re very bad at taking what they’ve learned in one kind
of situation and transferring it to another”—the essence of analogy."
