[ExI] Semiotics and Computability

x at extropica.org
Mon Feb 15 00:28:06 UTC 2010


On Sun, Feb 14, 2010 at 2:25 PM, Jeff Davis <jrd1415 at gmail.com> wrote:
> On Sun, Feb 14, 2010 at 2:39 AM, Ben Zaiboc <bbenzai at yahoo.com> wrote:
>
>> What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.)
>
> Yes.  This solves the original problem -- which came about, as I see
> it, due to incompleteness in defining the problem, and a consequent
> incompleteness in the simulation -- by completing the simulation.

Seems to me your extension makes no qualitative difference in regard
to the issue at hand.  You already had much of the machinery, and you
added some you realized you had left out.  Still, nowhere in any formal
description of that machinery, no matter how carefully you look, will
you find any actual "meaning."  You'll find only patterns of stimulus
and response, syntactically complete, semantically vacant.
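
(A rough Python sketch of that point, with made-up names and a made-up
70 mg/dl threshold, just to make it concrete: a "real" and a "fake"
glucose sensor expose the identical stimulus/response surface, and the
downstream map never sees anything but the signal.)

    # Toy sketch: two sensors with the same formal description.
    class RealGlucoseSensor:
        def __init__(self, blood_glucose_mg_dl):
            self.level = blood_glucose_mg_dl
        def signal(self):
            return "hunger" if self.level < 70 else "sated"

    class FakeGlucoseSensor:
        def __init__(self, simulated_level):
            self.level = simulated_level
        def signal(self):  # same contract, same outputs
            return "hunger" if self.level < 70 else "sated"

    def brain_map(sensor):
        # The map only ever consumes the signal, never the sensor's "nature".
        return {"hunger": "seek food", "sated": "carry on"}[sensor.signal()]

    print(brain_map(RealGlucoseSensor(60)))  # seek food
    print(brain_map(FakeGlucoseSensor(60)))  # seek food -- indistinguishable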

I.
You're missing the basic systems-theoretic understanding that the
behavior of any system is meaningful only within the context of its
environment of interaction.  Take the "whole human", i.e. a
description of everything within the boundaries of the skin, and
execute its syntax and you won't get human-like behavior--unless you
also provide (simulate) a suitable environment of interaction.
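
(A toy sketch of the coupling I mean, in Python; agent_step and
env_step stand in for the full descriptions, which I'm obviously not
supplying.)

    def run(agent_step, env_step, agent_state, env_state, percept, ticks=10):
        # Behavior is a property of the coupled loop, not of agent_step alone.
        for _ in range(ticks):
            agent_state, action = agent_step(agent_state, percept)
            env_state, percept = env_step(env_state, action)
        return agent_state, env_state

    # Degenerate stubs, just to show the shape of the coupling:
    agent = lambda s, p: (s + [p], "act(" + str(p) + ")")
    world = lambda s, a: (s, "percept-after-" + a)
    final_agent, final_world = run(agent, world, [], None, "light", ticks=3)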

II.
Now go ahead and simulate the human, within an appropriate environment.
You'll get human-like behavior, indistinguishable in principle from
the real thing.  Now you're back to the very correct point of Searle's
Chinese Room Argument:  There is no "meaning" to be found anywhere in
the system, no matter how precise your simulation.

Now Daniel Dennett or Thomas Metzinger or John Pollock (when feeling
bold enough to say it) or Siddhārtha Gautama, or Jef will say "Of
course.  The "consciousness" you seek is a function of the observer,
and you've removed the observer role from the system under
observation.  There is no *essential* consciousness.  Never had it,
never will.  The very suggestion is incoherent: it can't be defined."

The logic of the CRA is correct.  But it reasons from a flawed
premise: that the human organism has this somehow ontologically
special thing called "consciousness."

So restart the music, and the merry-go-round.  I'm surprised no one's
mentioned the Giant Look Up Table yet.
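
(For anyone who hasn't met it: the Giant Look Up Table is the
degenerate case -- every possible conversational history mapped
straight to a canned reply.  A toy Python version, with the obvious
caveat that the real table would be combinatorially enormous.)

    # Toy GLUT: an entire input history keys directly to a reply.
    GLUT = {
        ("Hello",): "Hi there.",
        ("Hello", "How are you?"): "Fine, thanks.  You?",
        # ... one entry per possible finite history; astronomically many
    }

    def glut_reply(history):
        # Pure table lookup: behaviorally adequate (in principle) for any
        # finite exchange, with no understanding anywhere in the system.
        return GLUT.get(tuple(history), "I don't follow.")

    print(glut_reply(["Hello"]))                  # Hi there.
    print(glut_reply(["Hello", "How are you?"]))  # Fine, thanks.  You?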

- Jef


