[ExI] The symbol grounding problem in strong AI

BillK pharos at gmail.com
Sun Dec 27 18:22:26 UTC 2009


On 12/27/09, Gordon Swobe wrote:
<snip>
>  Your challenge is to show that replacing natural neurons with your
> mitochondria-less nano-neurons that only behave externally like real
> neurons will still result in consciousness, given that science has now
> (hypothetically) discovered that chemical reactions in mitochondria
> act as the NCC.
>
>  I think you will agree that you cannot show it, and I note that my
> mitochondrial theory of consciousness represents just one of a very
> large and possibly infinite number of possible theories of consciousness
>  that relate to the interiors of natural neurons, any one of which may
> represent the truth and all of which would render your nano-neurons
> ineffective.
>
>

No. Your point of contention is only of interest to armchair
philosophers who have no practical interest in building AI systems.

The Wikipedia article on the Chinese Room points out the irrelevancy
of your philosophical contortions:

Strong AI v. AI research

Searle's argument does not limit the intelligence with which machines
can behave or act; indeed, it fails to address this issue directly,
leaving open the possibility that a machine could be built that acts
intelligently but does not have a mind or intentionality in the same
way that brains do.

Since the primary mission of artificial intelligence research is only
to create useful systems that act intelligently, Searle's arguments
are not usually considered an issue for AI research. Stuart Russell
and Peter Norvig observe that most AI researchers "don't care about
the strong AI hypothesis—as long as the program works, they don't care
whether you call it a simulation of intelligence or real
intelligence."

Searle's "strong AI" should not be confused with "strong AI" as
defined by Ray Kurzweil and other futurists, who use the term to
describe machine intelligence that rivals human intelligence. Kurzweil
is concerned primarily with the amount of intelligence displayed by
the machine, whereas Searle's argument sets no limit on this, as long
as it is understood that it is merely a simulation and not the real
thing.
----------------------------

And that's the important point for the future of humanity. We don't
care whether the AGI is 'really' intelligent or just 'simulating'
intelligence. It is the practical results that matter.

BillK
