[ExI] Some new angle about AI

Aware aware at awareresearch.com
Fri Jan 8 16:42:08 UTC 2010


On Fri, Jan 8, 2010 at 7:48 AM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> 2010/1/8 Aware <aware at awareresearch.com>:
>
>> But if you watch carefully, he accepts functionalism, IFF the
>> candidate machine/substrate actually reproduces the function of the
>> brain. But then he goes on to show that for any formal description of
>> any machine, there's no place IN THE MACHINE where understanding
>> actually occurs.
>
> He explicitly says that a machine could fully reproduce the function
> of a brain but fail to reproduce the consciousness of the brain. He
> believes that the consciousness resides in the actual substrate, not
> the function to which the substrate is put. If you want to extend
> "function" to include consciousness then he is a functionalist, but
> that is not a conventional use of the term.

Searle is conflicted.  Just not at the level you (and most others)
keep focusing on.  When I first read of the Chinese Room back in the
early 80s, my first reaction was a bit of disdain for his "obvious"
lack of respect for scientific materialism.  But at the same time I
had the nagging thought that this is an intelligent guy, so maybe
something more subtle is going on (even though he's still wrong), and
look at how the arguments just keep going around and around.  The
next time I came back to it, later in the 80s, it made complete sense
to me (while he was still wrong, but still getting a lot of mileage
out of his ostensible paradox).

As I've said before on this list, paradox is always a matter of
insufficient context.  In the bigger picture all the pieces must fit.


>> He's right about that.
>
> He actually *does* think there is a place in the machine where
> understanding occurs,

Yes, I've emphasized that mistaken premise as loudly as I could, a few times.

> if the machine is a brain.

or a "fully functional equivalent", WHATEVER THAT MEANS.  Note that
Searle, like Chalmers, does not provide any resolution, but only
emphasizes "the great mystery", the "hard problem" of consciousness.


>> But here he goes wrong:  He claims that human brains obviously do have
>> understanding, and suggests that he has therefore proved that there is
>> something different about attempts to produce the same in machines.
>>
>> But there's no understanding in the human brain, either, nor any
>> evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER.
>
> Right.
>
>> We don't have understanding in our brains, but we don't need it.
>> Never did.  We have only actions, which appear (with good reason) to
>> be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE
>> ACTOR ITSELF.
>
> Searle would probably say there's no observer in a computer.

I agree that's what he would say.  It works well with popular opinion,
and keeps the discussion spinning around and around.


>> Sure it's non-intuitive.  It's Zen.  In the true, non-bastardized
>> sense of the word. And if you're gonna design an AI that displays
>> consciousness, then it would be helpful to understand this so you
>> don't spin your wheels trying to figure out how to implement it.
>
> You could take the brute force route and copy the brain.

Yes, Markram is working on implementing something like that, and
Kurzweil uses that as his limiting case for predicting the arrival of
"human equivalent" artificial intelligence.  There are complications
with that "obvious" approach, but I have no desire to embark on
another, likely fruitless, thread at this time.


<snip>

>>>> So I suggest (again) to you and Gordon, and Searle, that you need to
>>>> broaden your context: there is no essential consciousness in the
>>>> system, but in the recursive relation between the observer and the
>>>> observed.  Even (or especially) when the observer and observed are
>>>> functions of the same brain, you get self-awareness entailing the
>>>> reported experience of consciousness, which is just as good because
>>>> it's all you ever really had.

<snip>

>> described.  But if you ask the observer about the experience, of
>> course it will truthfully [without deception] report in terms of
>> first-person experience.  What more is there to say?
>
> Searle would say that experience must be an intrinsic property of the
> matter causing the experience. If not, then it would be possible to
> get it out of one system reacting to or observing another system as
> you describe, which would be deriving meaning from syntax, which he
> believes is a priori impossible.

As far as I know, he does NOT say that "experience"
(qualia/meaning/intentionality/consciousness/self/free-will) must be
an intrinsic property of the matter.  He appears content to present it
as a great mystery, one that quite conveniently pushes people's
buttons: on one side by appearing to elevate the status of humans as
possessing some special quality, and on the other by offending the
righteous sensibilities of those who feel they must defend scientific
materialism.  It's all good for Searle as the debate swirls around
him, around and around...

- Jef


