[ExI] Mental Phenomena

John Clark johnkclark at gmail.com
Wed Feb 12 13:49:49 UTC 2020


On Tue, Feb 11, 2020 at 7:01 PM Brent Allsop via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

Hi Brent

>> Suppose there was a parallel Everettian reality that was exactly like
>> our own except that the English language had developed slightly differently
>> so that we called the color of the sky "red" and the color of a
>> strawberry "blue", it wouldn't make any difference because the words chosen
>> were arbitrary, the important thing is that the words be used
>> consistently. And the same thing is true not only for words but for the red
>> and blue qualia themselves. And that's why your color inversion
>> experiment would result in precisely zero objective change in behavior and
>> zero change in subjective feeling; your experimental subject would have
>> no way of even knowing you had done anything to him at all.
>>
>
> This is all obviously true, and I've never disagreed with any of this.
>

So I guess you now agree we can learn nothing from your color inversion
experiment.


> The important part isn't the fact that abstract words are arbitrary;
> what we are talking about is how you define these arbitrary words.
>

As I've said before, definitions are just words describing other words, so
if you're looking for the ultimate source of meaning you're not going to find
it in an infinite loop, you'll find it in examples. Very young children
become fluent in language at an astonishingly rapid rate, and yet you don't
find them reading dictionaries, but you do find them pointing to things and
saying "what's that?".

> What are the different definitions of redness and greenness, which we may
> both call "red"?
>

There are not different definitions of colors; there are different examples
of colors.
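
To make "defined by examples" concrete, here is a toy sketch in Python (my
own invention, not anything from your document; the RGB values are arbitrary
stand-ins for pointed-at things) where a color word means nothing but its
stock of examples:

# A word "defined" purely ostensively: "red" is whatever label owns
# the nearest stored example, no dictionary entry ever consulted.
EXAMPLES = {
    "red":   [(255, 0, 0), (220, 30, 40)],   # pointed-at red things
    "green": [(0, 255, 0), (40, 200, 60)],   # pointed-at green things
}

def distance(a, b):
    # squared distance between two RGB triples
    return sum((x - y) ** 2 for x, y in zip(a, b))

def whats_that(rgb):
    # answer the child's question by resemblance to stored examples
    return min(EXAMPLES, key=lambda word:
               min(distance(rgb, ex) for ex in EXAMPLES[word]))

print(whats_that((200, 20, 20)))   # prints "red"

Point at more things and the word's meaning sharpens, and at no stage does a
definition in terms of other words ever enter the picture.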

>> My axiom is that intelligent behavior implies consciousness,
>>
>
> If that were true, then all 3 of these robots which are equally
> intelligent in their ability to pick strawberries
> <https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>
> would be conscious.  That is inconsistent with the fact that two of
> those robots have knowledge that is not physically arbitrary,
>

Yes, both robot #1 and robot #2 will have knowledge that the light reflected
from ripe strawberries is different from the light reflected from unripe
strawberries, and so they can engage in the intelligent behavior of picking
the ripe ones but not the unripe ones. And there is nothing inherently red
about glutamate and nothing inherently green about glycine; the chemicals are
just arbitrary labels for a range of wavelengths in the electromagnetic
spectrum, and so are the very words "red" and "green". So if you had a button
that could instantly change one robot's arbitrary notation to the other's,
then unless the robot saw you push the button it would have no way of knowing
any change had occurred, and its behavior would be identical.
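
To make that concrete, here is a toy sketch in Python (my invention, not
anything from your document; the 600 nm threshold and the token names are
arbitrary) of two pickers whose internal notations are swapped relative to
each other, yet whose behavior is indistinguishable from the outside:

def make_picker(ripe_token, unripe_token):
    # each picker uses its own internal token consistently, and that
    # consistency is the only thing its behavior depends on
    def sense(wavelength_nm):
        # wavelengths above ~600 nm stand in for light from a ripe berry
        return ripe_token if wavelength_nm > 600 else unripe_token
    def act(wavelength_nm):
        return "pick" if sense(wavelength_nm) == ripe_token else "leave"
    return act

robot1 = make_picker("glutamate", "glycine")
robot2 = make_picker("glycine", "glutamate")   # the notation-swap "button"

for wl in (700, 530):                          # red light, then green light
    print(wl, robot1(wl), robot2(wl))
    assert robot1(wl) == robot2(wl)            # identical behavior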


> the 3rd is, by design, abstracted away from being physically like
> anything, in an arbitrary way.  And therefore isn't conscious.
>

I can't make heads or tails out of robot #3. You say:

"Let’s engineer this robot’s knowledge to be abstracted away from any
physical qualities.  It will use the number “1” to represent knowledge of
red things, and “0” for knowledge of green things."

But that's contradictory: the words "red" and "green" are just labels that
a high-level language like English uses to represent PHYSICAL QUALITIES,
and a lower-level language, the brain's assembly language so to speak (or
maybe its machine code), uses glutamate and glycine. And light is a PHYSICAL
QUALITY, and whatever label a mind uses to represent it must also be
physical.

You say:
"This robot has multiple diverse kinds of interpreting mechanisms"

But that's another example of the robot NOT abstracting away all physical
qualities. To do any sort of interpreting you're going to have to do data
processing, and to do that you're going to need matter, and not just any
matter but matter that's organized in a fundamentally specific way, a way
that Alan Turing discovered in 1936. That's why even the most advanced book
on number theory can't add 2+2 but the cheapest pocket calculator can; the
matter in the book just isn't organized in the right way.
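
To show just how little machinery that fundamental organization requires,
here is a minimal sketch in Python (my own toy example, not anything from
Turing's 1936 paper) of a four-rule machine that adds two numbers written in
unary:

# A table-driven Turing machine that adds unary numbers, e.g. "11+111"
# means 2 + 3.  All of its power lives in the organization of the rule
# table, none of it in what the symbols are made of.
RULES = {
    # (state, symbol): (write, move, next_state)
    ("scan",  "1"): ("1", +1, "scan"),   # walk right over the 1s
    ("scan",  "+"): ("1", +1, "scan"),   # turn the '+' into a 1
    ("scan",  " "): (" ", -1, "erase"),  # past the end: back up one cell
    ("erase", "1"): (" ", -1, "halt"),   # erase the surplus 1 and halt
}

def run(tape_string):
    tape = list(tape_string) + [" "]     # a blank cell past the end
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(run("11+111"))   # prints 11111, that is 2 + 3 = 5

The symbols could just as well be ink marks, voltages, or molecules of
glutamate; every bit of the adding ability lives in how the rule table and
the tape are organized.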

Brent:
"The first 2 robots represent knowledge of the strawberry directly on
physical qualities,"

Yes, and although you don't say what it is, if robot #3 is to do any sort of
data processing, much less engage in intelligent behavior, it's going to have
to represent its numbers with something physical, just as the first two
robots did.

>> what particular qualia an external stimulus is bound to may result in
>> differences in brain chemistry, but those different chemistries result in no
>> subjective change whatsoever and no change in behavior either.
>>
>
> Having trouble parsing this.
>

Which word didn't you understand?

John K Clark