[ExI] Semiotics and Computability

Spencer Campbell lacertilian at gmail.com
Sat Feb 6 20:09:01 UTC 2010


Stathis Papaioannou <stathisp at gmail.com>:
>Spencer Campbell <lacertilian at gmail.com>:
>> They're extremely different things. We take
>> meaning as input and output, or at least feel like we do, but we
>> simply HAVE understanding.
>>
>> And no, it isn't a substance. It's a measurable phenomenon. Not easily
>> measurable, but measurable nonetheless.
>
> By definition it isn't measurable, since (according to Searle and
> Gordon) it would be possible to perfectly reproduce the behaviour of
> the brain, but leave out understanding. It is only possible to observe
> behaviour, so if behaviour is separable from understanding, you can't
> observe it. I'm waiting for Gordon to say, OK, I've changed my mind,
> it is *not* possible to reproduce the behaviour of the brain and leave
> out understanding, but he just won't do it.

Unfortunately for you and Gordon alike, both of you are right in
this case, at least if you define understanding as I do: to
understand a system is to have a model of that system in your mind,
which entails the ability to correctly guess past or future states
of the system from an assumed state, or the consequences of an
interaction between that system and another understood system.

It's easy to see how this definition covers understanding things like
weather patterns, but it also applies in some unexpected ways. I
understand English, so I can guess what will happen in your mind when
you read this sentence. It will be a fairly inaccurate guess by any
objective measure, but it will still beat pure chance by many, many
orders of magnitude.
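To make "measurable" a little more concrete, here is a minimal
sketch of the idea in Python (the toy weather system, its transition
table, and all the names are my own illustration, nothing more): a
model counts as understanding a system to the degree that its
guesses about the system's next state beat pure chance.

import random

STATES = ["sunny", "cloudy", "rainy"]

# Hidden rule the toy "world" actually follows.
TRANSITIONS = {"sunny": "cloudy", "cloudy": "rainy", "rainy": "sunny"}

def world_next(state):
    return TRANSITIONS[state]

def informed_model(state):
    # Has internalized the rule, i.e. holds a model of the system.
    return TRANSITIONS[state]

def chance_model(state):
    # Holds no model of the system at all.
    return random.choice(STATES)

def accuracy(model, trials=10000):
    # Fraction of trials in which the model guesses the next state.
    hits = 0
    for _ in range(trials):
        state = random.choice(STATES)
        hits += model(state) == world_next(state)
    return hits / trials

print("informed model:", accuracy(informed_model))  # ~1.0
print("chance model:  ", accuracy(chance_model))    # ~0.33

The informed model scores close to 1.0 and the chance model close to
1/3; the gap between the two is one way to put a number on
understanding in the sense defined above.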

Defining understanding in terms of associations between symbols makes
no sense to me. I understand that dogs are canines, but that has no
relationship whatsoever to my understanding of dogs; I can only make
the statement at all because I understand English. It's a fact about
words more than it is a fact about animals.

Returning to the original point: Stathis is correct in saying that
understanding has an effect on behavior, and Gordon is correct in
saying that intelligent behavior does not imply understanding. I can
argue these points further if they aren't obvious, but to me they
are. It should be possible, in theory, to perfectly reproduce human
behavior without reproducing a lick of human understanding, but only
up to a point.

We can set up an experiment in which the (non-understanding) robot
does exactly the same thing as the human, but if we observed the
human and the robot in their natural environments for a couple of
years, it would soon become obvious that they approach the world in
radically different ways, even if their day-to-day behavior is
nearly indistinguishable.

(The robot I'm thinking of would be built to "understand" the world
right off the bat, rather than learning about things as it goes along,
as we do.)
