[ExI] Mental Phenomena

Stathis Papaioannou stathisp at gmail.com
Thu Feb 13 18:42:25 UTC 2020


On Fri, 14 Feb 2020 at 04:54, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Can we talk about certain facts you guys continue to ignore?  I keep
> trying to do this with everything, including the 3 robots paper
> <https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>,
> but you guys forever continue to refuse to acknowledge these facts.
>
>    1. Robot 1's honest and factually correct answer to the question
>    "What is redness like for you?" is:
>       1. My redness is like what Stathis experiences when he looks at
>       something that reflects or emits red light.
>
But Robot 1 could never know that, so it isn't honest and factually
correct.


>    1. Robot 2's honest and factually correct answer to the same question
>    is different:
>       1. My redness is different; it is like what Stathis experiences
>       when he looks at something that reflects or emits green light.
>
>
And Robot 2 could never know that either. It’s just the nature of
subjective experience; otherwise, it isn’t subjective.

>
>    1. For you guys, the only requirement for something to have "qualia"
>    is that it has the same quantity of memory, and that the robot be able to
>    pick the strawberry identically to robots 1 and 2.
>
No, I don't know if something that can do that has qualia; I only know
that I have qualia. I also know that if it does have qualia, the qualia
will not change if a physical change is made that results in no possible
behavioural change.

>
>    1. Your model is, by definition, qualia blind, since it can't account
>       for the fact that the first two of these robots have very different
>       answers, and robot #3 has no justified answer to this question.
>
All three robots might say the same thing, and we would have no idea what,
if anything, they are actually experiencing.

>
>    1. Your definition of 'qualia' is completely redundant in your
>       system.  You don't need the word 'qualia', and you don't need two words
>       like red and redness, because one word, red, is adequate to model
>       everything you care about.  So, trying to use the redundant term 'qualia'
>       in your system just makes you look like you are trying to act smart, but
>       obviously are still very qualia blind.
>
Red is an objective quality; redness is subjective.

>
>    1. You remain like Frank Jackson's Mary, before she steps out of the
>       black and white room.  Like you, she has abstract descriptions of all of
>       physics.  To you guys, that is all that matters, and you don't care to step
>       out of the room so you can learn the physical qualities your abstract
>       descriptions are describing.
>
But Mary does not have the subjective experience until she steps out of
the room. She knows about all the physical qualities because they are
objective. If a redness experience were objective, she would know that
before she stepped out of the room.

>
>    1. Within your model there is an "Explanatory Gap
>       <https://en.wikipedia.org/wiki/Explanatory_gap>" which cannot be
>       resolved, and there are a LOT of people who have justified arguments for
>       there being a "hard [as in impossible] mind-body problem."
>       2. All the arguments you continue to assert, including the neural
>       substitution argument and your assertion that this #3 robot has qualia,
>       are only justified, and only adequate "proofs", in such a qualia blind model
>       which can't account for all these facts.
>          1. Within a less naive model, which is sufficient to account for
>          the above facts, all your arguments, definitions of qualia, and so on, are
>          obviously absurdly mistaken, unjustified, and anything but 'proof'.
>          2. Your so-called 'proof' is all you are willing to consider,
>          since you don't care about any of these other facts, and you are perfectly
>          OK with saying robot 3 has 'qualia', even though you have no objective or
>          subjective way of defining what the qualia might be like.
>
Only Robot 3 itself knows if it has qualia. We cannot know if it does or
what they are like. Plugging ourselves into the robot would not give us
this information.

>
> On Thu, Feb 13, 2020 at 10:15 AM Will Steinberg via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Things are NOT colors.  A strawberry has nothing to do with the red
>> quale; it simply reflects 680 nm light.
>>
>> 680 nm light is NOT a color.  It is interpreted as a red quale when it
>> interfaces with the eyes and brain.
>>
>> Some entities can't sense that light.  Some might see something
>> different.  Some might be moving very fast, experience a Doppler effect,
>> and not even see the light as 680 nm.  Not only is everything relative, but
>> everything is VERY relative, because qualia are not standalone; they only
>> happen when information enters a system.  They depend on both.
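
A quick worked version of that Doppler point (the 0.1c speed is just an
illustrative assumption): for an observer approaching the source, the
relativistic shift is

    lambda_obs = lambda_src * sqrt((1 - beta) / (1 + beta)),  beta = v/c

so at beta = 0.1, lambda_obs = 680 nm * 0.905, or roughly 615 nm: the same
light, now in the orange part of the spectrum.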
>>
>> On Thu, Feb 13, 2020 at 10:12 AM John Clark via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Wed, Feb 12, 2020 at 9:31 PM Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>> *> you guys are all completely qualia blind.*
>>>
>>>
>>> You have 2 possibilities to consider:
>>> 1) Solipsism is true: we are zombies, so we really are qualia blind,
>>> and you are the only conscious being in the universe.
>>> 2) You are qualia delusional, that is to say, your philosophical ideas
>>> are self-contradictory.
>>>
>>> > *Not only do you not know the physical color of anything, you don't
>>>> care.*
>>>
>>>
>>> I am unable to care much until you explain exactly (or at least
>>> approximately) what you mean by "physical color". And if it doesn't involve
>>> the subjective ability to notice a change in the wavelength of
>>> electromagnetic radiation and the ability to objectively act on that
>>> differentiation, then whatever you mean by it just isn't very interesting. I
>>> mean... if it doesn't affect anything objectively and it doesn't affect
>>> anything subjectively either, then I just can't work up much enthusiasm
>>> about studying it.
>>>
>>> > *Having this dictionary will tell us what color things are,*
>>>>
>>>
>>> You keep trying to find the nature of things at the most fundamental
>>> level and yet for some strange reason you keep talking about dictionaries.
>>> A dictionary is a list of definitions of words. Every definition is itself
>>> made of words, every one of those words has its own definition also made of
>>> words, and the infinite loop continues. You're not going to obtain
>>> philosophical insight by reading a dictionary. And if there isn't an
>>> infinite chain of "why" questions and there really is one correct answer to
>>> the consciousness question at the most fundamental level, then at some point
>>> in the chain of questions you are going to say "I see a termination because
>>> a miracle occurs here" or, if you prefer, "a brute fact occurs here". After
>>> all, an effect without a cause does not violate any law of logic.
>>> Fortunately, with data processing the miracle is as small as possible
>>> because changes don't get simpler than changing on to off.
>>>
>>>
>>>> > *where we connect our brains with 3 million neurons, so we can
>>>> directly experience the actual physical colors in other's brains, the same
>>>> way the physical knowledge in our left hemisphere is directly
>>>> computationally bound to the physical knowledge in our right. *
>>>>
>>>
>>> We know from experiments with people that when those 3 million neurons
>>> connecting the brain's hemispheres are cut, the individual who received the
>>> surgery starts acting in ways that are different from the way he acted
>>> before the surgery. Both hemispheres are capable of acting independently
>>> of the other, their behaviors differ from each other, and neither matches
>>> the behavior of the pre-surgery individual. And it can be shown that one
>>> hemisphere can know things that the other does not. And so I would
>>> maintain that neither hemisphere knows what it's like to be the other,
>>> neither hemisphere knows what it's like to have 2 working hemispheres
>>> connected by 3 million information-carrying cables, and the pre-surgery
>>> individual doesn't know what it will be like to have a split brain in his
>>> head.
>>>
>>> *> we aren't just some kind of brain in a vat.*
>>>>
>>>
>>> I don't know why you keep saying that as if it's something of
>>> fundamental importance, skulls and vats are just slightly different types
>>> of containers for brains.
>>>
>>> > *And it's up to the experimentalists. *
>>>>
>>>
>>> Exactly, and just like Evolution itself, experimentalists can see
>>> intelligent behavior but they can't see qualia or consciousness.
>>> Nevertheless Evolution managed to produce consciousness at least once (in
>>> me) and probably many billions of times, so I conclude consciousness must
>>> be a byproduct of something that Evolution can see, something like
>>> intelligent behavior. And experimentalists can form some conclusions about
>>> qualia and consciousness, but only if they make some assumptions that,
>>> although my hunch is they are largely correct, they can't prove and will
>>> never be able to prove.
>>>
>>> *> the current popular consensus that "The supervening qualities are the
>>>> result of the ones and zeroes"*
>>>>
>>>
>>> Ones and zeroes are pure abstractions, but information is physical, and
>>> so is the difference between an electrical circuit that is open and an
>>> electrical circuit that is closed. So I guess I believe in half of what
>>> you call the "popular consensus" (although in my experience it's not all
>>> that popular).  Supervenience is just a two-dollar word for "depends on",
>>> and I think that both intelligent behavior and consciousness are the
>>> result not of ones and zeros but of open/closed or on/off; you can
>>> represent one and zero with on and off if you want, but you don't have
>>> to. If you're working in Boolean logic and not arithmetic, you can have
>>> them represent true or false or any other binary quality you like.
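
A minimal sketch of that last point in Python (names purely illustrative):
the same binary state can be read arithmetically as 1/0 or logically as
true/false, and nothing about the state itself fixes which reading applies.

    # One physical distinction (on/off), read two ways.
    state = True

    as_number = int(state)            # arithmetic reading: 1
    as_truth = state                  # Boolean reading: True

    print(as_number + as_number)      # 2 -- arithmetic on the representation
    print(as_truth and not as_truth)  # False -- logic on the very same state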
>>>
>>> *> I'll bet any amount of money, at any odds, that functionalist camps
>>>> will be the first to be experimentally falsified, once experimentalists
>>>> stop being qualia blind. Anyone care to put any money where their mouth
>>>> is? *
>>>>
>>>
>>> I've been known to make small bets on scientific matters before (and to
>>> be honest I usually ended up losing money) but I refuse to make a bet if I
>>> don't understand exactly, or even approximately, what the bet actually is.
>>>
>>>  John K Clark
-- 
Stathis Papaioannou