[ExI] Semiotics and Computability

Aware aware at awareresearch.com
Thu Feb 18 15:12:40 UTC 2010


On Thu, Feb 18, 2010 at 6:33 AM, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Thu, 2/18/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> The man cannot understand the symbols - no way, no
>>> how, not in a million years - and when you realize this
>>> you'll learn something important about yourself.
>>
>> Amazingly the brain does understand symbols, even though it
>> is in the same position as the CR, except worse since the neurons are
>> far dumber than even the dumbest man. When you understand this you
>> will understand something important about yourself.
>
> That's also true.
>
> The man cannot understand the symbols and he does no more than implement a program.

Yes.  This is clear and consistent.


> But the human brain understands symbols.

Despite the understandably seductive nature of that belief, and its
natural and expected origins in the fact that any situated agent, to
be effective, must have a model of itself to which it can refer, the
assertion is unsupported.  It cannot even be modeled in any way that
can be tested.


> So, either
>
> 1) the brain does not implement programs, or
> 2) the brain implements programs and does something else also.

Or, the simpler, more coherent explanation: NO SYSTEM "has" any
essential understanding (an essence which you cannot even define, but
only point to in de se terms), but many systems demonstrate appropriate
behavior, meaningful only in terms of an observer, even when that
observer is considered a part of the observed.

There is no reason that we should reject functional equivalence,
substrate independence, or computational models of self-aware systems,
nor is there any reason to postulate the existence of some mysterious
"something else."

- Jef
