[ExI] Searle and AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 28 02:46:24 UTC 2009


2009/12/28 Damien Broderick <thespike at satx.rr.com>:
> On 12/27/2009 10:32 AM, Ben Zaiboc wrote:
>
>> I was led to think that Searle believes that conscious AI is impossible
>> (due to certain people saying things like "Strong AI of the sort that Searle
>> refutes"), but in "Why I Am Not a Property Dualist", he says:
>>
>> "Maybe someday we will be able to create conscious artifacts, in which
>> case subjective states of consciousness will be ‘physical’ features of those
>> artifacts"
>>
>> and
>>
>> "Consciousness is thus an ordinary feature of certain biological systems,
>> in the same way that photosynthesis, digestion, and lactation are ordinary
>> features of biological systems"
>
> Yes, and this is what his more careless disciples (and foes) seem to
> overlook. It's why John Clark's repeated wailing about Darwin misses the
> point. Searle knows perfectly well that consciousness is a feature of
> evolved systems (and so far only of them); he is arguing that current
> computational designs lack some critical feature of evolved intentional
> systems. We don't know that this is wrong. The wonderfully named Dr. Johnjoe
> McFadden, professor of molecular genetics at the University of Surrey and
> author of Quantum Evolution, argues ( http://www.surrey.ac.uk/qe/ ) that
> certain quantum fields and interactions are crucial to the function of mind.
> If that turns out to be right, it's possible that only entirely novel kinds
> of AIs will experience initiative and qualia, etc. And if that is the case,
> the standard reply to the Chinese Room asserting that the room as a whole
> has consciousness will be falsified, since such an arrangement would lack
> the requisite entanglements, etc, that have been installed in human embodied
> brains by... yes, Mr. Darwin's friend, evolution by natural selection of
> gene variants.

Searle's error is to agree that the function of the brain is Turing
emulable but to claim that consciousness is not; in other words, that
computers are capable of weak AI but not strong AI. The argument due
to David Chalmers that I have been putting to Gordon
(http://consc.net/papers/qualia.html) shows that this position is
absurd: IF the brain is computable THEN so is consciousness; IF weak
AI on a computer is possible THEN strong AI is also possible.
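
To make the inconsistency explicit, the argument can be put in
propositional form. This is only a sketch; the labels B and C are my
own shorthand, not Chalmers's notation:

Let $B$ = the brain's function is Turing emulable (weak AI possible)
Let $C$ = consciousness is Turing emulable (strong AI possible)

$B \rightarrow C$             (Chalmers's fading-qualia conditional)
$B \wedge \neg C$             (Searle's position)
$\therefore C \wedge \neg C$  (contradiction)

Anyone who grants B while denying C is therefore committed to a
contradiction.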
On the other hand, if some crucial aspect of brain physics is not
computable, then neither weak nor strong AI will be possible. Quantum
mechanics is computable, and a quantum computer can't do anything a
classical computer can't do (it just does some things faster; see the
sketch below), but it is possible that neurons depend on effects
described by some as yet undiscovered physical theory which is not
computable, as Roger Penrose has proposed. Penrose does not believe
that either weak AI or strong AI is possible on a computer, and is
therefore consistent, though probably wrong (hardly any other
scientists agree with him). Searle, on the other hand, is
inconsistent.
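
To illustrate the computability point, here is a minimal sketch in
Python (assuming numpy; the state and gate names are just
illustrative) of a classical machine exactly simulating a one-qubit
quantum computation. The same approach works for any number of
qubits; the cost merely grows exponentially with qubit count:

import numpy as np

# The state of n qubits is a complex vector of length 2**n; here n = 1.
state = np.array([1.0, 0.0], dtype=complex)  # the |0> state

# Hadamard gate: puts the qubit into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5], quantum behaviour computed classically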


-- 
Stathis Papaioannou


