[ExI] Some new angle about AI

Aware aware at awareresearch.com
Thu Jan 7 16:28:56 UTC 2010

On Thu, Jan 7, 2010 at 6:40 AM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> 2010/1/7 Aware <aware at awareresearch.com>:
>> ... it seems that everyone here (and Searle) would agree with
>> the functionalist position: that perfect copies must be identical,
>> and thus functionalism needs no defense.
> The functionalist position is that a different machine performing the
> same function would produce the same mind. Searle and everyone on this
> list does not agree with this, nor to be fair is it trivially obvious.

You say "different machine"; I would say "different substrate", but no
matter.  We in this discussion, including Gordon, plus Searle, are of
a level of sophistication that none of us believes in a "soul in the
machine".  Most people in this forum (and other tech/geek forums) have
gotten to that level of sophistication, where they can proudly enjoy
looking down from their improved point of view and smugly denounce
those who don't, while remaining blind to levels of meaning still
higher and more encompassing.

>> Stathis continues to argue on the basis of functional identity, since
>> he doesn't seem to see how there could be anything more to the
>> question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE
>> LOOP_, but I suspect he didn't finish it.]
> I got to chapter 11, as it happens, and I did mean to finish it but
> still haven't.

I didn't finish it either.  I found it very disappointing (my
expectations set by GEB) in its self-indulgence and its lack of any
substantial new insight.  However, it may be useful for some not
already familiar with its ideas.

> I agree with Hofstadter's, and your, epiphenomenalism.

But it's not most people's idea of epiphenomenalism, where the
"consciousness" they know automagically emerges from a system of
sufficient complexity and configuration.  Rather, it's an
epistemological understanding of the (recursive) relationship between
the observer and the observed.

> I usually only contribute to this list when I *disagree* with what
> someone says and feel that I have a significant argument to present
> against it. I'm better at criticising and destroying than praising and
> creating, I suppose.

It's always easier to criticize, but creating tends to be more
rewarding.  Praising tends to fall by the wayside among us INTJs.

> The argument with Gordon does not involve
> proposing or defending any theory of consciousness,

Here I must disagree...

> but simply looks
> at the consequences of the idea that it is possible for a machine to
> reproduce behaviour but not thereby necessarily reproduce the original
> consciousness.

Your insistence that it is this simple is prolonging the cycling of
that "strange loop" you're in with Gordon.  It's not always clear what
Gordon's argument IS--often he seems to be parroting positions he
finds on the Internet--but to the extent he is arguing for Searle, he
is not arguing against functionalism.

Given functionalism, and the "indisputable 1st person evidence" of the
existence of consciousness/qualia/meaning/intentionality within the
system ("where else could it be?"), he points out quite correctly that
no matter how closely one looks, no matter how subtle one's formal
description might be, there's syntax but no semantics in the system.

So I suggest (again) to you and Gordon, and Searle, that you need to
broaden your context: there is no essential consciousness in the
system, but in the recursive relation between the observer and the
observed.  Even (or especially) when the observer and observed are
functions of the same brain, you get self-awareness entailing the
reported experience of consciousness, which is just as good because
it's all you ever really had.

> It's not immediately obvious that this is a silly idea,
> and a majority of people probably believe it.

Your faith in functionalism is certainly a step up from the
assumptions of the silly masses.  But everyone in this discussion, and
most denizens of the Extropy list, already get this.

>  However, it can be shown
> to be internally inconsistent, and without invoking any assumptions
> other than that consciousness is a naturalistic phenomenon.

Yes, but that's not the crux of this disagreement.  In fact, there is
no crux of this disagreement, since resolving it is not a matter of
showing what's wrong within, but of reframing it in terms of a larger
context.

Searle and Gordon aren't saying that machine consciousness isn't
possible.  If you pay attention you'll see that once in a while
they'll come right out and say as much, at which point you think
they've expressed an inconsistency.  They're saying that even though
it's obvious that some machines (e.g. humans) do have consciousness,
it's also clear that no formal system implements semantics.

That's why this, and the perennial personal-identity debates, tend to
be so intractable:  It's like the man looking for the car keys he
dropped somewhere in the dark, but looking only around the lamppost,
for the obvious reason that that's the only place he can see.  Enlarge
the context.

- Jef
