[ExI] Mental Phenomena

Dylan Distasio interzone at gmail.com
Thu Feb 13 21:47:13 UTC 2020

Forgive me for shifting gears slightly, but doesn't this also imply that
substrate doesn't matter?

On Thu, Feb 13, 2020 at 4:44 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Fri, 14 Feb 2020 at 06:45, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> Stathis,  I've already pointed out, many, many times, how what you are
>> saying here, yet again, is only justified in a simplistic world that can't
>> account for all these facts.
>> But, since you continue to prove you don't understand them by repeating
>> them despite this fact, let me, yet again, point out the facts you are
>> ignoring.
>> On Thu, Feb 13, 2020 at 11:43 AM Stathis Papaioannou via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>> On Fri, 14 Feb 2020 at 04:54, Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>> Can we talk about certain facts you guys continue to ignore?  I keep
>>>> trying to do this with everything including the 3 robots paper
>>>> <https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>,
>>>> but you guys, forever, continue to refuse to acknowledge these facts.
>>>>    1. Robot 1's honest and factually correct answer to the question
>>>>    "What is redness like for you?" is:
>>>>       1. My redness is like what Stathis experiences when he looks at
>>>>       something that reflects or emits red light.
>>> But Robot 1 could never know that, so it isn’t honest and factually
>>> correct.
>> It is a fact that we have composite qualitative experiences where both
>> redness and greenness can be computationally bound.  What you continue to
>> assert here just proves you are ignoring the facts of the matter of what
>> this "computational binding" system must be able to do (i.e., the third and
>> strongest form of effing the ineffable).  You can't acknowledge that this
>> fact proves this "could never" claim of yours is factually false.
>>>>    2. Robot 2's honest and factually correct answer to the same
>>>>    question is different:
>>>>       1. My redness is different; it is like what Stathis experiences
>>>>       when he looks at something that reflects or emits green light.
>>> And Robot 2 could never know that either. It’s just the nature of
>>> subjective experience; otherwise, it isn’t subjective.
>> I've already pointed out how this is factually incorrect, yet again, above.
>> For you guys, the only requirement for something to have "qualia" is
>> that it has the same quantity of memory, and that the robot be able to pick
>> the strawberry identically to robots 1 and 2.
>>> No, I don’t know if something that can do that has qualia, I only know
>>> that I have qualia. I also know that if it does have qualia, the qualia
>>> will not change if a physical change is made that results in no possible
>>> behavioural change.
>>>>    3. Your model is, by definition, qualia blind, since it can't
>>>>       account for the fact that the first two of these robots have very different
>>>>       answers, and robot #3 has no justified answer to this question.
>>> All three robots might say the same thing, and we would have no idea
>>> what, if anything, they are actually experiencing.
>> Yet again, I've pointed out above that this is just factually
>> incorrect.
>>>>    4. Your definition of 'qualia' is completely redundant to your
>>>>       system.  You don't need the word 'qualia', and you don't need two words
>>>>       like red and redness, because one word, red, is adequate to model
>>>>       everything you care about.  So, trying to use the redundant term 'qualia'
>>>>       in your system just makes you look like you are trying to act smart, but
>>>>       obviously are still very qualia blind.
>>> Red is an objective quality; redness is subjective.
>> Why are you saying this and again ignoring the facts?  I've been
>> attempting to point out exactly this same thing in everything I say.  While
>> there is a falsifiable possibility that these two are different, it is also
>> a possibility that they are the same, and that the objective is just an
>> abstract (no qualitative information) description of the subjective
>> (qualitative because, unlike the objectively perceived things, we can be
>> directly aware of it).
>>>>    5. You remain like Frank Jackson's Mary, before she steps out of
>>>>       the black and white room.  Like you, she has abstract descriptions of all
>>>>       of physics.  To you guys, that is all that matters, and you don't care to
>>>>       step out of the room so you can learn the physical qualities your abstract
>>>>       descriptions are describing.
>>> But Mary does not have the subjective experience until she steps out of
>>> the room. She knows about all the physical qualities because they are
>>> objective. If a redness experience were objective she would know that
>>> before she stepped out of the room.
>> Again, you are proving that you are not understanding the fact I'm
>> pointing out when I say objective information is, by design, abstracted away
>> from any particular physical qualities, and hence contains no qualitative
>> information.  Because the word "red" isn't physically red, you need a
>> dictionary to know what it means.  Stathis doesn't need a dictionary to
>> know what his redness is qualitatively like.
>>>>    6. Within your model there is an "Explanatory Gap
>>>>       <https://en.wikipedia.org/wiki/Explanatory_gap>" which cannot be
>>>>       resolved, and there are a LOT of people that have justified arguments for
>>>>       there being a "hard [as in impossible] mind-body problem."
>>>>    7. All the arguments you continue to assert, including the
>>>>       neural substitution argument, and your assertion that this #3 robot has
>>>>       qualia, are only justified and only adequate "proofs" in such a qualia
>>>>       blind model which can't account for all these facts.
>>>>          1. Within a less naive model, which is sufficient to account
>>>>          for the above facts, all your arguments, definitions of qualia, and so on,
>>>>          are obviously absurdly mistaken, unjustified, and anything but 'proof'.
>>>>          2. Your so-called 'proof' is all you are willing to consider,
>>>>          since you don't care about any of these other facts, and you are perfectly
>>>>          OK with saying robot 3 has 'qualia', even though you have no objective or
>>>>          subjective way of defining what the qualia might be like.
>>> Only Robot 3 itself knows if it has qualia. We cannot know if it does
>>> or what they are like.
>> Again, you are ignoring the facts I pointed out above, which prove
>> this statement factually incorrect.
>>> Plugging ourselves into the robot would not give us this information.
>> Finally!!  It almost sounds like you are acknowledging the fact that we
>> have a computational binding system.  Thank you.
>> The only problem is, you are making this claim that this will not enable
>> us to eff the ineffable with zero justification (other than what might be
>> possible in an inadequate, qualia-blind model).
>> Again, it remains a fact that we can have computationally bound
>> experiences composed of both red and green.  It is a fact that we know both
>> what that redness is like, and how it is different from greenness, as surely
>> as Descartes knew that because he thinks, he necessarily exists.  It is
>> also a fact that since you can do it between two brain hemispheres, you can
>> also do it between 4, proving your claim here incorrect.
> We have done the experiment with neural implants, thousands of times, in
> patients who have an artificial cochlea. The qualia from the cochlea are
> fully integrated into the consciousness of the subject, and appropriately
> “bound” with other qualia. Yet we have no idea what, if anything, an
> artificial cochlea experiences.
> --
> Stathis Papaioannou