[Paleopsych] NS: Whatever happened to machines that think?
checker at panix.com
Sat Apr 23 08:56:54 UTC 2005
Whatever happened to machines that think?
* 23 April 2005
* Justin Mullins
THE SINGULARITY: In futurology, a technological singularity is a
predicted point in the development of a civilisation at which
technological progress accelerates beyond the ability of present-day
humans to fully comprehend or predict. The singularity can more
specifically refer to the advent of smarter-than-human intelligence,
and the cascading technological progress assumed to follow.
Wikipedia - the online encyclopedia
CLEVER computers are everywhere. From robotic lawnmowers to
intelligent lighting, washing machines and even car engines that
self-diagnose faults, there's a silicon brain in just about every
modern device you can think of. But can you honestly call any machine
intelligent in a meaningful sense of the word?
One rainy afternoon last February I decided to find out. I switched on
the computer in my study, and logged on to www.intellibuddy.com,
home to one of the leading artificial intelligences on the planet, to
see what the state-of-the-art has to offer.
Intellibuddy: Hi, I'm Intellibuddy!
Nice to meet you.
JM: I'm writing an article about artificial intelligence.
Intellibuddy: Will you include me in your essay?
JM: I will if you can persuade me you really are intelligent.
Intellibuddy: Would you recommend this action to a friend?
JM: Errr...are you intelligent?
Intellibuddy: Ummm I am highly intelligent. I have many intellectual functions.
JM: What are your intellectual functions?
Intellibuddy: Thought, reason, intelligence, logic and deduction.
JM: What is your IQ?
Intellibuddy: My IQ is about 250 in human terms.
JM: Wow, you're cleverer than my editor.
Intellibuddy: I'm glad your [sic] impressed, User.
You think I am cleverer than my editor.
JM: No, cleverer than my editor.
Intellibuddy: I'm glad your impressed, User.
You think I am cleverer than my editor.
JM: No, cleverer than my editor.
Intellibuddy: I'm glad your impressed, User.
You think I am cleverer than my editor...
So much for today's artificial intelligence. Intellibuddy is a version
of one of the world's most successful chatbots, called ALICE
(Artificial Linguistic Internet Computer Entity) and invented in 1995
by Richard Wallace, an independent researcher based in San Francisco.
You can find versions of ALICE all over the web; the software is free.
But whichever version you choose to chat to, the results are
disappointingly similar. While some conversations have promising
starts, all descend into the type of gibberish that only artificial
intelligence can produce.
And it's not as if there hasn't been time to perfect the idea. The
first chatbot appeared in the 1960s. Back then, the very idea of
chatting to a computer astounded people. Today, a conversation with a
computer is viewed more on the level of talking to your pet pooch -
cute, but ultimately meaningless.
The problem with chatbots is a symptom of a deeper malaise in the
field of artificial intelligence (AI). For years researchers have been
promising to deliver technology that will make computers we can chat
to like friends, robots that function as autonomous servants, and one
day, for better or worse, even produce conscious machines. Yet we
appear to be as far away as ever from any of these goals.
But that could soon change. In the next few months, after being
patiently nurtured for 22 years, an artificial brain called Cyc
(pronounced "psych") will be put online for the world to interact
with. And it's only going to get cleverer. Opening Cyc up to the
masses is expected to accelerate the rate at which it learns, giving
it access to the combined knowledge of millions of people around the
globe as it hoovers up new facts from web pages, webcams and data
entered manually by anyone who wants to contribute.
Crucially, Cyc's creator says it has developed a human trait no other
AI system has managed to imitate: common sense. "I believe we are
heading towards a singularity and we will see it in less than 10
years," says Doug Lenat of Cycorp, the system's creator.
But not all AI researchers welcome such claims. To many, they only
fuel the sort of hype that spawned what became known as the "AI
winter" of the 1990s. This was a time that saw government funding for
AI projects cut and hopes dashed by the cold reality that making
computers intelligent in the way we humans perceive intelligence is
just too hard. Many scientists working in areas that were once
considered core AI now refuse even to be associated with the term. To
them, the phrase "artificial intelligence" has been forever tainted by
a previous generation of researchers who hyped the technology and the
fabled singularity beyond reason. For them, the study of artificial
intelligence is a relic of a bygone era that has been superseded by
research with less ambitious, more focused goals. Arguably, this has
already led to a limited form of AI appearing all around us. Elements
of AI research are now used in everything from credit-scoring systems
and automatic camera focus controllers, to number plate recognition in
speed cameras and spaceship navigation.
AI is in many ways as old as computing itself. The purpose of building
computers in the first place was, after all, to perform mathematical
tasks such as code breaking that were too tough for humans to tackle.
It was in 1950 that Alan Turing, the celebrated second world war code
breaker, mathematician and arguably inventor of the first computer,
formulated the test that would become the benchmark by which the
intelligence of all computer programs would subsequently be measured
(see "Turing's test"). Even in Turing's day, computers were beginning
to outperform humans in certain specific tasks. And as early as 1948,
John von Neumann, one of the fathers of the computer revolution, said:
"You insist that there is something a machine cannot do. If you will
tell me precisely what it is that a machine cannot do, then I can
always make a machine which will do just that." It seemed just a
matter of time before computers would outperform people in most mental tasks.
But many scientists and philosophers baulked at the idea. They claimed
that there was something about being human that a computer could never
match. At first the arguments centred on properties such as
consciousness and self-awareness, but disagreement over what exactly
these terms meant and how we could test for them prevented the debate
from making any real progress. Others admitted computers could become
intelligent but said they would never develop qualities such as
compassion or wisdom which were uniquely human, the result of our
emotional upbringing and experience. The definition of intelligence
itself began to slip through the philosophers' fingers, and the
disagreements continue today.
Most researchers would at least encompass in their definition of AI
the goal of building a machine that behaves in ways that would be
called intelligent if a human were responsible for that behaviour.
Others would cast the definition even wider. Ant colonies and immune
systems, they say, also behave intelligently in ways that are utterly
non-human. But to get bogged down in the debate is to fall into the
same trap that has plagued AI for decades. The Turing test is a
reasonable yardstick. We will know an intelligent machine when we can
talk to one without realising it is a machine, and programs of
Intellibuddy's ilk clearly fall short of this requirement.
Intellibuddy is merely one of the latest in a long line of chatbots.
In 1966 Joseph Weizenbaum, a computer scientist at the Massachusetts
Institute of Technology, developed the first chatbot, named Eliza
after Eliza Doolittle, the character in George Bernard Shaw's
Pygmalion who is taught to imitate upper-class English speech. The
program was designed to mimic a psychotherapist and conversed with its
patient mainly through a simple rhetorical trick: it reworded the
patient's statements as questions. For example:
Patient: I want to cry
Eliza: Why do you say you want to cry?
Patient: Because my mother hates me
Eliza: Who else in your family hates you?
...and so on. Eliza was programmed to spot key phrases in its
interlocutor's sentences and plug them into preformed sentences of its
own. It was hugely successful. The idea of talking to a computer
astounded people, and there are even anecdotes of people developing
emotional attachments to Eliza.
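The trick is simple enough to sketch in code. The toy below follows the same keyword-and-template recipe the article describes; the handful of rules and the pronoun table are invented for illustration and are nothing like Weizenbaum's actual script:

import re

# A few illustrative rules: each pattern maps a keyword in the user's
# sentence to a canned response template, with a slot for the matched
# (and pronoun-reflected) fragment.
RULES = [
    (re.compile(r"i want (.*)", re.I),
     "Why do you say you want {0}?"),
    (re.compile(r"my (mother|father|sister|brother) (.*)", re.I),
     "Who else in your family {1}?"),
    (re.compile(r"i am (.*)", re.I),
     "How long have you been {0}?"),
]

# Swap first and second person so "my mother hates me" comes back as a
# question about the speaker ("... hates you?").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."                        # fallback when no keyword matches

print(respond("I want to cry"))                   # Why do you say you want to cry?
print(respond("Because my mother hates me"))      # Who else in your family hates you?

Fed the exchange quoted above, it produces much the same replies as Eliza did, and, like Eliza, it has no grasp of what any of the words mean.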
This early success contributed to a sense of optimism that the
problems of AI could be overcome, much of it based on the idea that
some kind of grand unified theory of mind would emerge that would
offer up a scheme to create artificial intelligence on a platter. The
late 1960s and early 1970s saw feverish speculation about the impact
intelligent machines might have on the world and the advantages they
would bring to whoever developed them. The computer HAL in Stanley
Kubrick's classic 1968 movie 2001: A space odyssey summed up the
visions being debated, and the fears they conjured up.
It was against this backdrop that Japan's Ministry of International
Trade and Industry announced, in 1982, a programme called the Fifth
Generation Computer Systems project to develop massively parallel
computers that would take computing and AI to a new level. The scale
of the project and its ambition were unprecedented and raised fears in
the west that Japan would end up dominating the computer industry in
the same way it had taken the lead in the electronics and automotive
industries. If it developed truly intelligent machines there was no
telling what Japan might be capable of.
An arms race of sorts ensued in which the US and Japan vied for
supremacy. The US Department of Justice even waived monopoly laws so
that a group of American corporations that included giants such as
Kodak and Motorola could join forces to match the Japanese research
effort. Between them they set up the Microelectronics and Computer
Technology Corporation (MCC) and asked Doug Lenat, then a computer
scientist at Stanford University, to lead it. The Defense Advanced
Research Projects Agency (DARPA), the Pentagon's research arm, also
began to take an interest, and injected huge amounts of funding into the field.
But progress was frustratingly slow, and as the hoped-for breakthrough
failed to materialise splits appeared between groups taking different
approaches. On one side were those who believed that the key to
intelligence lay in symbolic reasoning, a mathematical approach in
which ideas and concepts are represented by symbols such as words,
phrases or sentences, which are then processed according to the rules
of logic. Given enough information, the hope was that these symbolic
reasoning systems would eventually become intelligent. This approach
appealed to many researchers because it meant that general proofs
might eventually be found that could simultaneously revolutionise
several branches of AI, natural language processing among them.
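Stripped to its essentials, a symbolic reasoner of this kind is a loop that applies if-then rules to a store of symbolic facts until nothing new can be derived. A minimal sketch, with the facts and the single rule invented purely for illustration:

# Minimal forward chaining over symbolic facts: keep applying a rule until
# no new facts appear.
facts = {("canary", "is-a", "bird"), ("bird", "is-a", "animal")}

def transitive_is_a(facts):
    """Rule: if X is-a Y and Y is-a Z, then X is-a Z."""
    derived = set()
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == "is-a" and r2 == "is-a" and y == y2:
                derived.add((x, "is-a", z))
    return derived - facts

while True:
    new_facts = transitive_is_a(facts)
    if not new_facts:           # fixed point: nothing left to infer
        break
    facts |= new_facts

print(("canary", "is-a", "animal") in facts)   # True, inferred rather than stated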
But by the early 1990s, it had become clear that the Japanese project
was not leading to any great leap forward in AI. Things were no better
in the US, as most of DARPA's projects failed to produce significant
advances and the agency withdrew much of its support. The repeated
failures of these so-called expert systems - computer programs which,
given specialist knowledge described by a human, use logical inference
to answer queries - caused widespread disillusionment with symbolic
reasoning. The human brain, many argued, obviously worked in a
different way. This led to a spurt of enthusiasm for new approaches
such as artificial neural networks, which at a rudimentary level
imitate the way neurons in the brain work, and genetic algorithms,
which imitate genetic inheritance and fitness to evolve better
solutions to a problem with every generation.
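The genetic-algorithm recipe itself fits in a few lines: keep a population of candidate solutions, score each with a fitness function, and breed the fittest with occasional mutation. The toy below evolves a bit-string towards all ones, a stand-in problem chosen only to keep the example self-contained:

import random

GENOME_LEN = 20          # length of each candidate bit-string
POP_SIZE = 30
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective: number of 1-bits; a real problem supplies its own score.
    return sum(genome)

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break                                   # a perfect solution has evolved
    parents = population[:POP_SIZE // 2]        # the fittest half survive
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(generation, fitness(population[0]))       # best fitness usually hits 20 well inside 100 generations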
Neural nets got off to a promising start and they are now used in
everything from computer games to DNA sequencing systems. It was hoped
that with sufficient complexity they could demonstrate intelligent
behaviour. But these hopes were dashed because, though neural networks
have the ability to learn from their mistakes, all existing models
failed to develop long-term memory.
In the AI winter that followed, research funds became difficult to
come by and many researchers focused their attention on more specific
problems, such as computer vision, speech recognition and automatic
planning, which had more clearly definable goals that they hoped would
be easier to achieve. The effect was to fragment AI into numerous
sub-disciplines. AI as an all-encompassing field died a sudden death.
This fragmentation has had some benefits. It has allowed researchers
to create a critical mass of work aimed at solving well-defined
problems. Computer vision, for example, is now a discipline with its
own journals and conferences. "There are people who spend their whole
careers on this one problem and never consider the other pieces of the
puzzle," says Tom Mitchell, an expert on AI at Carnegie Mellon
University in Pittsburgh, Pennsylvania. The same is true of speech
recognition, text analysis and robot control.
Simple common sense
But Lenat refused to give up. As an academic he had been working on
building a database of common-sense knowledge, which he believed would
be the key to cracking artificial intelligence. When funding for MCC
dried up, he decided in 1994 to go it alone and spin off a company
called Cycorp, based in Austin, Texas, to continue the development of
the AI system he named Cyc.
The driving philosophy behind Cyc is that it should be able to
recognise that in the phrase "the pen is in the box", the pen is a
small writing implement, while in the sentence "the box is in the
pen", the pen is a much larger corral. Lenat reels off examples where
such common-sense distinctions can make all the difference. In speech
recognition, the best way to distinguish between the two spoken
sentences "I've hired seven people" and "I fired seven people", which
can sound virtually identical, is to analyse the context of the
conversation and what it means, rather than just look at the words in
isolation. "Humans would never have this problem, but a computer
program might easily be confused," says Lenat. "There's almost no area
of AI that wouldn't benefit." For Lenat, it is the difference between a usable system and an unusable one.
Cyc has been meticulously assembled to relate each fact to others
within the database. It knows for example, that in the sentence "each
American has a president" there is only one president, whereas in the
sentence "each American has a mother" there are many millions of
mothers. Cyc's knowledge is stored in the form of logical clauses that
assert truths it has learned. It is based on the symbolic reasoning
systems that failed to deliver in the mid-1990s.
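The president/mother distinction comes down to how the quantifiers in each assertion are scoped, something any logical encoding has to make explicit. A rough sketch of the two readings, written as generic first-order-style structures rather than Cyc's own CycL notation:

# Two readings of "each American has ...", with the quantifier scope made
# explicit. Illustrative only; this is not Cyc's actual representation.

# "Each American has a president": one president, shared by everyone.
#   exists p . forall a . American(a) -> has(a, p)
has_a_president = ("exists", "p",
                   ("forall", "a",
                    ("implies", ("American", "a"), ("has", "a", "p"))))

# "Each American has a mother": every American gets their own mother.
#   forall a . American(a) -> exists m . has(a, m)
has_a_mother = ("forall", "a",
                ("implies", ("American", "a"),
                 ("exists", "m", ("has", "a", "m"))))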
But Lenat and his team have made huge leaps since then. One of the
curious things about Cyc is that the more it knows, the more easily it
can learn. The volume of common sense it has accumulated means it can
begin to make sense of things for itself. And every new fact that
makes sense is incorporated and cross-referenced into its database.
Lenat maintains that the rate at which the system can learn depends on
the amount of common sense it has about the world, and that today's AI
systems perform so badly because they have close to none. "The rate at
which they can learn is close to zero," he says.
One of Cyc's most impressive features is the quality of the deductions
it can make about things it has never learned about directly. For
example, it can tell whether two animals are related without having
been programmed with the explicit relationship between each animal we
know of. Instead, it contains assertions that describe the entire
Linnaean system of taxonomy of plants and animals, and this allows it
to determine the answer through logical reasoning.
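A drastically simplified sketch of that style of reasoning: store only each taxon's parent, and the relatedness of two animals is derived by walking the hierarchy rather than being stored as an explicit fact. The miniature taxonomy below is invented for illustration and bears no resemblance to Cyc's actual knowledge base:

# Tiny invented fragment of a Linnaean-style hierarchy: child -> parent.
PARENT = {
    "dog": "Canidae", "wolf": "Canidae", "cat": "Felidae",
    "Canidae": "Carnivora", "Felidae": "Carnivora",
    "Carnivora": "Mammalia", "human": "Primates", "Primates": "Mammalia",
}

def ancestors(taxon):
    """Walk up the hierarchy, collecting every group the taxon belongs to."""
    chain = []
    while taxon in PARENT:
        taxon = PARENT[taxon]
        chain.append(taxon)
    return chain

def lowest_common_group(a, b):
    """First shared ancestor: never stored explicitly, always derived."""
    anc_a = ancestors(a)
    for group in ancestors(b):
        if group in anc_a:
            return group
    return None

print(lowest_common_group("dog", "wolf"))    # Canidae  (closely related)
print(lowest_common_group("dog", "human"))   # Mammalia (distantly related)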
Cyc now contains 3 million assertions. Impressive as that is, sheer
numbers are not the point. "We are not trying to maximise the number
of assertions," Lenat says. Rather, he wants to limit them to the bare
minimum that will allow Cyc to collect data on its own. He says Cyc is
getting close to achieving that number, and it is already advanced
enough to query each input itself, asking the human operator to
clarify exactly what is meant.
Sometime this year it will be let loose onto the web, allowing
millions of people to contribute to its fund of knowledge by
submitting questions to Cyc through a web page and correcting it if it
gets the answers wrong. "We're very close to a system that will allow
the average person to enter knowledge," Lenat says. He envisages Cyc
eventually being connected to webcams and other sensors monitoring
environments around the globe, building its knowledge of the world
more or less by itself.
When Cyc goes live, users should expect to get answers to their
questions only some of the time because it won't yet have the common
sense to understand every question or have the knowledge to answer it.
But with the critical mass looming, in three to five years users
should expect to get an answer most of the time. Lenat has pledged to
make access to Cyc freely available, allowing developers of other AI
systems to tap into its fund of common sense to improve the
performance of their own systems.
Lenat's optimism about Cyc is mirrored by a reawakening of interest in
AI the world over. In Japan, Europe and the US, big, well-funded AI
projects with lofty goals and grand visions for the future are once
again gaining popularity. The renewed confidence stems from a new
breed of systems that can deal with uncertainty - something humans
have little trouble with, but which has till now brought computer
programs grinding to a halt.
To cope with the uncertainty of the real world, the new programs
employ statistical reasoning techniques: for example, a robot might
measure the distance to a nearby wall, move and make another similar
measurement. Is it seeing the same wall or a different one? At this
stage it cannot tell, so it assigns a probability to each option, and
then takes further measurements and assigns further probabilities. The
trick, of course, is to ensure that this process converges on a
solution - a map of the room. In practice these systems work most of
the time, but the all too real fear is that the number of calculations
could explode far beyond the robot's capabilities, leaving it
hopelessly confused. Ways of coping with these situations are
currently a hot topic of research.
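The flavour of that statistical bookkeeping can be captured in a simple Bayesian update over a grid of candidate wall distances - a deliberately crude one-dimensional toy, nothing like a full mapping system:

import math

# Candidate wall distances from 1.0 m to 5.0 m in 10 cm steps, with a flat
# prior: before any measurement the robot has no idea where the wall is.
distances = [d / 10.0 for d in range(10, 51)]
belief = [1.0 / len(distances)] * len(distances)

def likelihood(measured, true_distance, sigma=0.3):
    """How probable a reading is, assuming Gaussian sensor noise of width sigma."""
    return math.exp(-((measured - true_distance) ** 2) / (2 * sigma ** 2))

def update(belief, measurement):
    posterior = [b * likelihood(measurement, d) for b, d in zip(belief, distances)]
    total = sum(posterior)
    return [p / total for p in posterior]      # renormalise so beliefs sum to 1

for reading in [2.3, 2.6, 2.4]:                # three noisy range measurements
    belief = update(belief, reading)

best_guess = distances[belief.index(max(belief))]
print(f"most probable wall distance: {best_guess:.1f} m")   # 2.4 m

Each new reading sharpens the belief; the open question in practice is keeping the bookkeeping tractable as the number of hypotheses grows.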
Systems using the mathematical technique known as Bayesian inference
have improved the performance of many AI programs to the point where
they can be used in the real world. The despised Microsoft Office
paper-clip assistant is based on Bayesian inference systems, as are
pattern-recognition programs that can read text, or identify
fingerprints and irises. Other approaches to AI have produced
specialist programs that have played, and occasionally beaten, world
champions in chess (New Scientist, 17 May 1997, p 13), draughts, and
even poker (New Scientist, 20 December 2003, p 64).
But problems remain. Voice recognition only works usably in ideal
conditions in which there is little or no background noise, and even
then its accuracy is limited (New Scientist, 9 April, p 22). Chess
programs are only capable of beating humans because the game itself
can be reduced to a tree of possible moves, and given enough time
computers are able to evaluate the outcome of each sequence of moves
to find the one that is most likely to lead to checkmate. In the game
of Go the board position is much harder for a computer to evaluate,
and even modestly talented players can beat the most powerful computer
programs. Robots have trouble negotiating obstacles that a
five-year-old can dodge with ease. And if Intellibuddy is anything to
go by, machines you can interact with, that understand what you are
saying and react appropriately, are still some way off.
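The "tree of possible moves" behind those chess programs is the classic minimax search: walk the tree and assume each side picks its best reply. A miniature version, run over a tiny hand-built tree that stands in for real chess positions:

# Minimal minimax search over a toy game tree (invented for illustration).
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"a1": 3, "a2": -2, "b1": 5, "b2": -7}   # leaf evaluations for the maximiser

def minimax(node, maximising):
    children = TREE.get(node)
    if not children:                              # leaf: return its static evaluation
        return SCORES[node]
    values = [minimax(child, not maximising) for child in children]
    return max(values) if maximising else min(values)

# The best first move is the child whose worst-case outcome is least bad.
best_move = max(TREE["root"], key=lambda child: minimax(child, maximising=False))
print(best_move, minimax("root", maximising=True))   # a -2

Chess yields to this because the tree, vast as it is, can be searched and its positions scored; Go's positions resist easy scoring, which is why the same approach has fared so much worse there.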
It's in your head
Where could the secret to intelligence lie? According to Mitchell, the
human brain is the place to look. He has been using functional
magnetic resonance imaging (fMRI) to see which parts of the brain
become active when a person thinks about a specific object. He has
found that when people are asked to imagine a tool such as a hammer or
a building such as a house, the same areas of the brain are activated
as when they are shown a picture of these objects. He has also found
that the area activated for each object - hammer or house - differs by
a discernible amount depending on the object. "You can train a program
to look at the brain images and determine with 90 per cent accuracy
whether that person is thinking about a tool or a building," he says.
Such a program could, eventually, literally read your mind.
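In outline, the decoding step Mitchell describes is an ordinary classification problem: each scan is reduced to a vector of voxel activations and a classifier learns which pattern goes with which category. A bare-bones nearest-centroid version, with made-up numbers standing in for real fMRI data:

import numpy as np

# Each row is one (made-up) scan reduced to three voxel activations; real
# fMRI data would have tens of thousands of voxels per image.
tool_scans     = np.array([[0.9, 0.1, 0.8], [1.0, 0.2, 0.7], [0.8, 0.0, 0.9]])
building_scans = np.array([[0.2, 0.9, 0.1], [0.1, 1.0, 0.3], [0.3, 0.8, 0.2]])

# "Training" a nearest-centroid classifier is just averaging each class.
centroids = {
    "tool": tool_scans.mean(axis=0),
    "building": building_scans.mean(axis=0),
}

def classify(scan):
    """Label a new scan by whichever class centroid it lies closest to."""
    return min(centroids, key=lambda label: np.linalg.norm(scan - centroids[label]))

print(classify(np.array([0.85, 0.15, 0.75])))   # tool
print(classify(np.array([0.15, 0.95, 0.20])))   # building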
For the moment the research is confined to concrete nouns associated
with physical objects. Next Mitchell plans to see how verbs make the
brain light up, and beyond that whether the same areas light up when
these nouns and verbs are incorporated into sentences, or whether
there is additional activation elsewhere in the brain. Mitchell hopes
that this approach will resolve the fundamental differences between
the symbolic reasoning approach to AI, and biologically inspired
approaches such as neural nets.
The clue that Mitchell thinks is significant is that the same part of
the brain seems to be responsible for both reasoning and perception.
So when thinking about a hammer the brain is acting like a symbolic
reasoning system, and when recognising a hammer the brain is acting
like a neural network. "Knowing this could provide some guidance,"
says Mitchell. Exactly how the designers of neural networks might use
such findings is not yet clear. But Mitchell is convinced that this
type of insight from functional brain imaging is set to have a huge
impact on the field.
Of course, if Lenat's prediction proves true, by the time Mitchell's
work bears fruit, Cyc may well have reached the singularity. The
history of AI suggests that is unlikely, but after decades of
faltering starts and failed promises, things are beginning to change.
Finally, machines might soon start to think for themselves.
A brief history of AI
1936 Alan Turing completes his paper "On computable numbers" which
paves the way for artificial intelligence and modern computing
1942 Isaac Asimov sets out his three laws of robotics in the book I, Robot
1943 Warren McCulloch and Walter Pitts publish "A logical calculus of
the ideas immanent in nervous activity" to describe neural networks
that can learn
1950 Claude Shannon publishes an analysis of chess playing as a search problem
1950 Alan Turing proposes the Turing test to decide whether a computer
is exhibiting intelligent behaviour
1956 John McCarthy coins the phrase "artificial intelligence" at a
conference at Dartmouth College, New Hampshire
1956 Demonstration of the first AI program, called Logic Theorist,
created by Allen Newell, Cliff Shaw and Herbert Simon at the Carnegie
Institute of Technology, now Carnegie Mellon University
1956 Stanislaw Ulam develops "Maniac I", the first chess program to
beat a human player, at the Los Alamos National Laboratory
1965 Herbert Simon predicts that "by 1985 machines will be capable of
doing any work a man can do"
1966 Joseph Weizenbaum, a computer scientist at the Massachusetts
Institute of Technology, develops Eliza, the world's first chatbot
1969 Shakey, a robot built by the Stanford Research Institute in
California, combines locomotion, perception and problem solving
1975 John Holland describes genetic algorithms in his book Adaptation
in Natural and Artificial Systems
1979 A computer-controlled autonomous vehicle called the Stanford
Cart, built by Hans Moravec at Stanford University, successfully
negotiates a chair-filled room
1982 The Japanese Fifth Generation Computer project to develop
massively parallel computers and a new artificial intelligence is born
Mid-1980s Neural networks become the new fashion in AI research
1994 Doug Lenat forms Cycorp to continue work on Cyc, an expert system
that's learning common sense
1997 The Deep Blue chess program beats the then world chess champion, Garry Kasparov
1997 Microsoft's Office Assistant, part of Office 97, uses AI to offer assistance to users
1999 Remote Agent, an AI system, is given primary control of NASA's
Deep Space 1 spacecraft for two days, 100 million kilometres from Earth
2001 The Global Hawk uncrewed aircraft uses an AI navigation system to
guide it on a 13,000-kilometre journey from California to Australia
2004 In the DARPA Grand Challenge to build an intelligent vehicle that
can navigate a 229-kilometre course in the Mojave desert, all the
entrants fail to complete the course
2005 Cyc to go online
Turing's test
In his essay "Computing Machinery and Intelligence", published in the
philosophical journal Mind in 1950, the British mathematician Alan
Turing argued that it would one day be possible for machines to think
like humans. But if so, how would we ever tell? Turing suggested that
we could consider a machine to be intelligent if its responses were
indistinguishable from a human's. This has become the standard test
for machine intelligence.
In 1990, Hugh Loebner, a New York philanthropist, offered a $100,000
prize for the first computer to beat the test and a $2000 annual prize
for the best of the rest. Past winners include Joseph Weintraub, a
three-time winner who developed The PC Therapist, and Richard Wallace,
the man behind ALICE, who has also won three times.