[ExI] Some new angle about AI
Stathis Papaioannou
stathisp at gmail.com
Fri Jan 8 15:48:29 UTC 2010
2010/1/8 Aware <aware at awareresearch.com>:
> But if you watch carefully, he accepts functionalism, IFF the
> candidate machine/substrate actually reproduces the function of the
> brain. But then he goes on to show that for any formal description of
> any machine, there's no place IN THE MACHINE where understanding
> actually occurs.
He explicitly says that a machine could fully reproduce the function
of a brain but fail to reproduce the consciousness of the brain. He
believes that consciousness resides in the actual substrate, not in
the function to which the substrate is put. If you want to extend
"function" to include consciousness, then he is a functionalist, but
that is not a conventional use of the term.
> He's right about that.
He actually *does* think there is a place in the machine where
understanding occurs, if the machine is a brain.
> But here he goes wrong: He claims that human brains obviously do have
> understanding, and suggests that he has therefore proved that there is
> something different about attempts to produce the same in machines.
>
> But there's no understanding in the human brain, either, nor any
> evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER.
Right.
> We don't have understanding in our brains, but we don't need it.
> Never did. We have only actions, which appear (with good reason) to
> be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE
> ACTOR ITSELF.
Searle would probably say there's no observer in a computer.
> Sure it's non-intuitive. It's Zen. In the true, non-bastardized
> sense of the word. And if you're gonna design an AI that displays
> consciousness, then it would be helpful to understand this so you
> don't spin your wheels trying to figure out how to implement it.
You could take the brute force route and copy the brain.
> <snip>
>
>>> So I suggest (again) to you and Gordon, and Searle, that you need to
>>> broaden your context. That there is no essential consciousness in the
>>> system, but in the recursive relation between the observer and the
>>> observed. Even (or especially) when the observer and observed are
>>> functions of the same brain, you get self-awareness entailing the
>>> reported experience of consciousness, which is just as good because
>>> it's all you ever really had.
>>
>> Isn't the relationship between the observer and observed a function of
>> the observer-observed system?
>
> No. The system that is being observed has no place in it where
> meaning/semantics/qualia/intentionality can be said to exist. If you
> look closely all you will find is components in a chain of cause and
> effect. Syntax but no semantics, as Gordon pointed out early on in
> this discussion. But an observer, at whatever level of recursion,
> will report meaning in its own terms.
>
> It may help to consider this:
>
> If I ask you (or you ask yourself (Don't worry; it's recursive)) about
> the redness of an apple that you are seeing, that "experience" never
> occurs in real-time. It's always only a product of some processing
> that necessarily takes some time. Real-time experience never happens;
> it's a logical and practical impossibility. So in any case, the
> information corresponding to the redness of that apple, its luminance,
> its saturation, its flaws, its associations with the remembered red of
> a fire truck, and on and on, is in effect delivered or made available,
> after some delay, to another system. And that system will do whatever
> it is that it will do, determined by its nature within that context.
> In the case of delivery to the system (observer) that is going to find
> out about that red, the observer system will then do something
> with that information (again completely determined by its nature,
> within that context). The observer system might remark out loud about the
> redness of the apple, and remember doing so. It may say nothing, and
> only store the new perception (of perceiving the redness). A moment
> later it may use that perception (from memory) again, of course linked
> with newly delivered information as well. If at any point the nature
> of the observer (within context, which might be me asking you what you
> experienced) focuses attention again on information about its internal
> state, the process repeats, keeping the observer process pretty well
> satisfied. From a third-person point of view, there was never any
> meaning anywhere in the system, including within the observer we just
> described. But if you ask the observer about the experience, of
> course it will truthfully report in terms of first-person experience.
> What more is there to say?
Searle would say that experience must be an intrinsic property of the
matter causing the experience. If it were not, then it would be possible to
get it out of one system reacting to or observing another system, as
you describe; but that would be deriving meaning from syntax, which he
believes is a priori impossible.
--
Stathis Papaioannou