[ExI] Is the brain a digital computer?

Stathis Papaioannou stathisp at gmail.com
Sat Feb 27 14:50:30 UTC 2010

On 28 February 2010 00:58, Gordon Swobe <gts_2000 at yahoo.com> wrote:

> It dawned on me that you hold two false assumptions, and that these
> assumptions explain the supposed problem that you present that leads to
> the supposed conclusion that no distinction exists between weak and strong
> AI.
> 1) In your arguments, you assume that for the weak AI hypothesis to hold,
> your supposed unconscious components/brains must follow the same physical
> architecture as organic brains. No such requirement exists in reality. AI
> researchers have the freedom to use whatever architecture they please to
> create weak AI, and it will come as no surprise to anyone if a successful
> architecture differs from that of an organic brain.

It's true that there is no requirement for an AI to follow brain
architecture, but I am considering the special case where it does.
Having reached agreement on what happens in this special case, it is
then a separate question what happens in the more general case.

> 2) In your arguments, you assume that your supposed artificial
> components/brains must "behave identically" to those of a non-AI. No such
> requirement exists in reality. The Turing test defines the only
> requirement, and just as you and I behave differently from one another
> while passing the TT, an AI might pass the TT while behaving quite
> differently from a human or another AI.

The original TT was proposed in order to answer the question of
whether computers can think. Turing thought that communication in
natural language over a text channel was sufficient to answer this
question, language being one of the highest expressions of human
intelligence. If a computer is capable of human language, then it
should be capable of every other behaviour a human can display. Do you
agree with that? And if a computer is capable of every behaviour that
a human can display, it should be capable of every behaviour that
simpler living things like mice, amoebae or neurons can display. Do
you agree with that?

It is true that, no matter how advanced the technology, it would not in
general be possible to make an AI device that would behave *exactly*
the same as the biological original, due to the effects of classical
chaos and quantum uncertainty. However, for these same reasons it
would also be impossible to make a biological copy that behaves
exactly the same as the original. Even as a result of normal metabolic
processes a cell changes over time, and occasionally things go wrong
and the cell dies or becomes cancerous. So if you want to be
absolutely precise, the task is to make an AI device that differs no
more in its behaviour from the original than a good biological copy
would.

>> > The task is to replace all the components of a neuron with
>> > artificial components so that the neuron behaves just the same.
>>
>> No, this sentence above of yours counts as a sample of false
>> assumption #2.
>>
>> AI researchers in the real world seek to replace all the
>> components of a brain with artificial components such that
>> the complete product passes the Turing test. Period.
>>
>> It does not matter to them or to me, nor should it matter
>> to you, whether the finished artificial neuron or the
>> finished AI behaves "just the same" as it would have behaved
>> had it not been replaced. Nobody can know the answer to that
>> question.

Some AI researchers, such as Henry Markram's group, are interested in
reproducing the structure and function of neural tissue as closely as
possible. However, this doesn't matter for present purposes, since the
thought experiment I have been discussing is designed to show something
important about consciousness; it is not meant as advice on the best
techniques for making an AI.

Stathis Papaioannou
