[ExI] Mental Phenomena

Stathis Papaioannou stathisp at gmail.com
Wed Feb 12 21:20:52 UTC 2020


On Wed, 12 Feb 2020 at 11:00, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi John,
>
> On Tue, Feb 11, 2020 at 3:37 PM John Clark via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, Feb 11, 2020 at 2:24 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> *For example, the retina is the mechanical dictionary-transducing
>>> system which interprets the red information in the light into the red
>>> information in the physically different red signal in the optic nerve.
>>> Ultimately, you need to interpret this abstract word like ‘red’,*
>>>
>>
>> And we can. The human retina has 3 different types of light sensors: #1
>> responds to red light, #2 responds to green light, and #3 responds to
>> blue. If the number in the first column that the brain receives from the
>> eye is larger than zero but the other 2 columns are zero, then we
>> interpret that abstract notion with another abstract notion: you see pure
>> red, a dim pure red if the number is small and an intense pure red if the
>> number is large. If the numbers in the first two columns are of equal
>> size but the third column remains zero then we see yellow, and if the
>> numbers in all 3 columns are equal we see white.
>>
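A minimal sketch of that mapping in Python, assuming each channel arrives
as a number in [0, 1] (the 0.5 dim/intense cutoff is an illustrative
assumption, not anything specified above):

    def interpret(red, green, blue):
        # Map the three cone-channel magnitudes the brain receives
        # to a colour report, following the rules quoted above.
        if red > 0 and green == 0 and blue == 0:
            return "dim pure red" if red < 0.5 else "intense pure red"
        if red == green > 0 and blue == 0:
            return "yellow"
        if red == green == blue > 0:
            return "white"
        return "some other colour"

    print(interpret(0.9, 0.0, 0.0))   # intense pure red
    print(interpret(0.4, 0.4, 0.0))   # yellow
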
>> Suppose there were a parallel Everettian reality that was exactly like
>> our own except that the English language had developed slightly
>> differently, so that we called the color of the sky "red" and the color
>> of a strawberry "blue". It wouldn't make any difference, because the
>> words chosen are arbitrary; the important thing is that the words be
>> used consistently. And the same thing is true not only for words but for
>> the red and blue qualia themselves. That's why your color inversion
>> experiment would result in precisely zero objective change in behavior
>> and zero change in subjective feeling; your experimental subject would
>> have no way of even knowing you had done anything to him at all.
>>
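That invariance is easy to check in a toy Python model, assuming
perception factors into stimulus -> internal label -> spoken word (the
two-colour permutation table is an illustrative assumption):

    # A consistent label permutation is behaviourally invisible.
    PERM = {"red": "blue", "blue": "red"}     # the "inversion"
    INV = {v: k for k, v in PERM.items()}     # its inverse

    def report(stimulus, inverted=False):
        # Stage 1: stimulus -> internal label (possibly permuted).
        internal = PERM[stimulus] if inverted else stimulus
        # Stage 2: internal label -> spoken word.  The words were
        # learned against the same permuted labels, so this stage
        # carries the inverse permutation and the outward report
        # is unchanged.
        return INV[internal] if inverted else internal

    assert report("red") == report("red", inverted=True) == "red"
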
>
> This is all obviously true, and I've never disagreed with any of it.
> The important part isn't the fact that abstract words are arbitrary;
> what we are talking about is how you define these arbitrary words.  What
> are the different definitions of redness and greenness, which we may
> both call "red"?  Do you use the same one as me, or are you engineered
> to have physically different knowledge?
>
>
>> My axiom is that intelligent behavior implies consciousness,
>>
>
> If that were true, then all 3 of these robots, which are equally
> intelligent in their ability to pick strawberries
> <https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>,
> would be conscious.  That is inconsistent with the fact that two of
> those robots have knowledge that is not physically arbitrary, for which
> there is something it is like to be them, while the 3rd is, by design,
> abstracted away from anything physically like anything, in an arbitrary
> way, and therefore isn't conscious.
>

I see no reason why the third robot should lack qualia and the first two
should have them.

>>> *“Computational binding” is what is done in a CPU.*
>>>
>>
>> And what particular qualia an external stimulus is bound to may result
>> in differences in brain chemistry, but those different chemistries
>> result in no subjective change whatsoever and no change in behavior
>> either.
>>
>
> Having trouble parsing this.
>
> Brent
>
-- 
Stathis Papaioannou


More information about the extropy-chat mailing list