[ExI] Some new angle about AI

Aware aware at awareresearch.com
Fri Jan 8 00:05:04 UTC 2010


On Thu, Jan 7, 2010 at 2:54 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> 2010/1/8 Aware <aware at awareresearch.com>:
>
> Searle is explicitly opposed to functionalism.

I admit it's been about 30 years since I read Searle's stuff and related
commentary in detail, but I think I've kept a clear understanding of
where he went wrong--an understanding of which I have yet to see
evidence from you.  [That sounds a little harsh, doesn't it? (INTJ)]

Seems to me that Searle accepts functionalism, but never makes it
explicit, possibly for the sake of promoting argument over his beloved
point.

Seems to me that nearly everyone reacts with some disdain to his
apparent affront to functionalism, and then proceeds to argue entirely
on that basis.

But if you watch carefully, he accepts functionalism, IFF the
candidate machine/substrate actually reproduces the function of the
brain. But then he goes on to show that for any formal description of
any machine, there's no place IN THE MACHINE where understanding
actually occurs.

He's right about that.

But here he goes wrong:  He claims that human brains obviously do have
understanding, and suggests that he has therefore proved that there is
something different about attempts to produce the same in machines.

But there's no understanding in the human brain, either, nor any
evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER.

We don't have understanding in our brains, but we don't need it.
Never did.  We have only actions, which appear (with good reason) to
be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE
ACTOR ITSELF.

Sure, it's non-intuitive.  It's Zen.  In the true, non-bastardized
sense of the word. And if you're gonna design an AI that displays
consciousness, then it would be helpful to understand this so you
don't spin your wheels trying to figure out how to implement it.

<snip>

>> So I suggest (again) to you and Gordon, and Searle, that you need to
>> broaden your context.  That there is no essential consciousness in the
>> system, but in the recursive relation between the observer and the
>> observed. Even (or especially) when the observer and observed are
>> functions of the same brain, you get self-awareness entailing the
>> reported experience of consciousness, which is just as good because
>> it's all you ever really had.
>
> Isn't the relationship between the observer and observed a function of
> the observer-observed system?

No.  The system that is being observed has no place in it where
meaning/semantics/qualia/intentionality can be said to exist.  If you
look closely, all you will find are components in a chain of cause and
effect.  Syntax but no semantics, as Gordon pointed out early on in
this discussion.  But an observer, at whatever level of recursion,
will report meaning in its terms.

It may help to consider this:

If I ask you (or you ask yourself (Don't worry; it's recursive)) about
the redness of an apple that you are seeing, that "experience" never
occurs in real-time.  It's always only a product of some processing
that necessarily takes some time.  Real-time experience never happens;
it's a logical and practical impossibility. So in any case, the
information corresponding to the redness of that apple, its luminance,
its saturation, its flaws, its associations with the remembered red of
a fire truck, and on and on, is in effect delivered or made available,
after some delay, to another system. And that system will do whatever
it is that it will do, determined by its nature within that context.
In the case of delivery to the system (the observer) that is going to
find out about that red, the observer system will then do something
with that information (again completely determined by its nature,
within that context).  The observer system might remark out loud about the
redness of the apple, and remember doing so.  It may say nothing, and
only store the new perception (of perceiving the redness).  A moment
later it may use that perception (from memory) again, of course linked
with newly delivered information as well.  If at any point the nature
of the observer (within context, which might be me asking you what you
experienced) focuses attention again on information about its internal
state, the process repeats, keeping the observer process pretty well
satisfied.   From a third-person point of view, there was never any
meaning anywhere in the system, including within the observer we just
described.  But if you ask the observer about the experience, of
course it will truthfully report in terms of first-person experience.
What more is there to say?

<snip>

> What about this idea: there is no such thing as semantics, really.
> It's all just syntax.

Yes, well, it all depends on your context, which is what I've been
saying all along.

- Jef


