[extropy-chat] FWD (SK) NYTimes.com Article: (14) Can Robots Become Conscious?

Terry W. Colvin fortean1 at mindspring.com
Tue Nov 11 23:30:09 UTC 2003


(14) Can Robots Become Conscious?

November 11, 2003
  By KENNETH CHANG


It's a three-part question. What is consciousness? Can you
put it in a machine? And if you did, how could you ever
know for sure?

Unlike almost any other scientific topic, consciousness -
the first-person awareness of the world around us - is truly
in the eye of the beholder. I know I am conscious. But how
do I know that you are?

Could it be that my colleagues, my friends, my editors, my
wife, my child, all the people I see on the streets of New
York are actually just mindless automatons who merely act
as if they were conscious human beings?

That would make this question moot.

Through logical
analogy - I am a conscious human being, and therefore you
as a human being are also likely to be conscious - I
conclude I am probably not the only conscious being in a
world of biological puppets. Extend the question of
consciousness to other creatures, and uncertainty grows. Is
a dog conscious? A turtle? A fly? An elm? A rock?

"We don't have the mythical consciousness meter," said Dr.
David J. Chalmers, a professor of philosophy and director
of the Center for Consciousness Studies at the University
of Arizona. "All we have directly to go on is behavior."

So without even a rudimentary understanding of what
consciousness is, the idea of instilling it into a machine
- or understanding how a machine might evolve consciousness
- becomes almost unfathomable.

The field of artificial intelligence started out with
dreams of making thinking - and possibly conscious -
machines, but to date, its achievements have been modest.
No one has yet produced a computer program that can pass
the Turing test.

In 1950, Alan Turing, a pioneer in computer science,
imagined that a computer could be considered intelligent
when its responses were indistinguishable from those of a
person. The field has evolved to focus more on solving
practical problems like complex scheduling tasks than on
emulating human behavior.
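
Turing's criterion amounts, in effect, to a blind judging
protocol: a judge converses with a hidden person and a
hidden machine and tries to tell which is which, and the
machine "passes" if the judge can do no better than chance.
A minimal sketch of that bookkeeping in Python follows; the
responder functions, questions and judge below are made-up
stand-ins for illustration, not any real chatbot or test
harness.

import random

def human_stand_in(question):
    # Placeholder for a person typing replies.
    return "Honestly, I'd have to think about that one."

def machine_stand_in(question):
    # Placeholder for the program under test.
    return "Honestly, I'd have to think about that one."

QUESTIONS = [
    "What did you have for breakfast?",
    "Why is a raven like a writing desk?",
    "Describe your earliest memory.",
]

def run_trial(judge):
    # One round: hide which respondent is which and let the
    # judge try to pick out the machine from the transcripts.
    respondents = [("human", human_stand_in),
                   ("machine", machine_stand_in)]
    random.shuffle(respondents)  # blind the judge
    transcripts = [(label, [respond(q) for q in QUESTIONS])
                   for label, respond in respondents]
    guess = judge([answers for _, answers in transcripts])
    return transcripts[guess][0] == "machine"

def naive_judge(answer_sets):
    # A judge who cannot tell the transcripts apart can only
    # guess at random.
    return random.randrange(len(answer_sets))

if __name__ == "__main__":
    trials = 1000
    unmasked = sum(run_trial(naive_judge) for _ in range(trials))
    # Indistinguishable answers leave the machine unmasked only
    # about half the time - no better than chance, which is
    # what "passing" means in this toy setup.
    print("machine identified in %d of %d trials" % (unmasked, trials))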

But with the continuing gains in computing power, many
believe that the original goals of artificial intelligence
will be attainable within a few decades.

Some people, like Dr. Hans Moravec, a professor of robotics
at Carnegie Mellon University in Pittsburgh, believe that a
human being is nothing more than a fancy machine, that there
is nothing magical about the brain and biological flesh, and
that as technology advances it will be possible to build a
machine with the same features.

"I'm confident we can build robots with behavior that is
just as rich as human being behavior," he said. "You could
quiz it as much as you like about its internal mental life,
and it would answer as any human being."

To Dr. Moravec, if it acts conscious, it is. To ask more is
pointless.

Dr. Chalmers regards consciousness as an ineffable trait,
one it may be futile to try to pin down. "We've got to
admit something here is irreducible," he said. "Some
primitive precursor consciousness could go all the way
down" to the smallest, most primitive organisms, even
bacteria, he said.

Dr. Chalmers too sees nothing fundamentally different
between a creature of flesh and blood and one of metal,
plastics and electronic circuits. "I'm quite open to the
idea that machines might eventually become conscious," he
said, adding that it would be "equally weird."

And if a person gets into involved conversations with a
robot about everything from Kant to baseball, "we'll be as
practically certain they are conscious as other people,"
Dr. Chalmers said.

"Of course, that doesn't resolve the theoretical question,"
he said.

But others say machines, regardless of how complex, will
never match people.

The arguments can become arcane. In his book "Shadows of
the Mind," Dr. Roger Penrose, a mathematician at Oxford
University in England, enlisted Gödel's incompleteness
theorem. He uses the theorem, which states that any
consistent formal system rich enough to express arithmetic
contains true statements that cannot be proven within it,
to argue that any machine that uses computation - and hence
all robots - will invariably fall short of the
accomplishments of human mathematicians.
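
For reference, the theorem Penrose leans on - Gödel's first
incompleteness theorem - in a standard textbook formulation
rather than Penrose's own wording:

    If F is a consistent, effectively axiomatized formal
    system able to express elementary arithmetic, then there
    is a sentence G_F such that

        F \nvdash G_F \quad \text{even though } G_F \text{ is true.}

Penrose's move, roughly, is that a human mathematician can
see that G_F is true for any candidate system F offered as a
model of her own reasoning, while F itself can never prove
it.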

Instead, he argues that consciousness arises from
quantum-mechanical effects in tiny structures in the brain,
effects that exceed the abilities of any computer.


http://www.nytimes.com/2003/11/11/science/11MACH.html?ex=1069533728&ei=1&en=b2c8d5710ed32897

Copyright 2003 The New York Times Company 


-- 
“Only a zit on the wart on the heinie of progress.” Copyright 1992, Frank Rice


Terry W. Colvin, Sierra Vista, Arizona (USA) < fortean1 at mindspring.com >
     Alternate: < fortean1 at msn.com >
Home Page: < http://www.geocities.com/Area51/Stargate/8958/index.html >
Sites: * Fortean Times * Mystic's Haven * TLCB *
      U.S. Message Text Formatting (USMTF) Program
------------
Member: Thailand-Laos-Cambodia Brotherhood (TLCB) Mailing List
   TLCB Web Site: < http://www.tlc-brotherhood.org >[Vietnam veterans,
Allies, CIA/NSA, and "steenkeen" contractors are welcome.]
