[ExI] Possible seat of consciousness found
Brent Allsop
brent.allsop at gmail.com
Thu Mar 12 23:10:42 UTC 2020
Hi Stathis and Stuart,
Thanks for your continued responses. I apologize for taking so long to get
back. Things have been extremely busy with work and personal life. I’m
finally finding time to reply to both of your responses.
On Thu, Mar 5, 2020 at 10:31 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> Quoting Brent Allsop:
>
> > Hi Stuart,
> > Thanks for the feedback on my terminology. That really helps.
> > But it would help if you could provide more evidence that you understand
> > why I'm using the terminology I am.
> > Much of what you are saying is evidence to me that you don't yet understand
> > the model I'm trying to describe and what "qualia blindness" means.
>
> The simple fact that you are unsure whether I understand you or not
> indicates that you yourself are qualia blind. Moreover, the fact that
> your model is itself an abstraction indicates that your model is
> qualia blind. The sad truth is that your terminology and model have no
> physical qualities at all. They are just pictures and words composed
> of Shannon information, i.e. literal bits on a bitmap projected on my
> screen, and are therefore qualia blind no matter how sublime and
> wondrous they may be in your own head.
>
> > On Tue, Feb 25, 2020 at 9:28 PM Stuart LaForge via extropy-chat <
> > extropy-chat at lists.extropy.org> wrote:
> >
> >> "Qualia blindness" sounds too pejorative to be useful as a term of
> >> art. You should stop using it especially since you tend to apply it to
> >> people who disagree with you and you have so much trouble explaining
> >> what it means. Perhaps "qualia denial" or "qualia denier" would be a
> >> better and more accurate term, since even Daniel Dennett experiences
> >> qualia, even though he doesn't believe them to be important.
> >>
> >
> > Saying this kind of stuff is strong evidence that you still don't
> > understand the model I'm trying to describe.
>
> Maybe it is because your description contradicts itself. Your
> description is merely abstract and has none of the physical qualities
> that it extols. Robots 1 & 2 claim that they experience physical
> qualities when seeing a strawberry, but a robot could be programmed to
> say that using a simple lookup table, regardless of whether it is true
> or not. Since there is no way for robots 1 and 2 to use abstract words
> and rules of grammar to prove that they see qualia, they are
> ultimately no different from robot 3, who simply admits that his
> knowledge is abstract. Especially since robot 3 might be lying too,
> because he is afraid somebody might try to "fix" him if he displays
> signs of consciousness. Therefore there is no way for you to
> communicate ANY model that is not qualia blind, since communication
> requires abstraction and information and is therefore itself qualia blind.
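>
> For illustration only, here is a minimal sketch (in Python, with
> hypothetical names of my own invention) of the kind of lookup-table
> responder I mean: it "reports" physical qualities whether or not there
> is any fact of the matter about what it experiences.
>
>     # Hypothetical lookup-table robot: maps detected wavelengths to
>     # canned verbal reports, with no inner quality required.
>     REPORTS = {
>         "700nm": "I experience a redness quality.",
>         "530nm": "I experience a greenness quality.",
>     }
>
>     def report(wavelength: str) -> str:
>         # Return a scripted claim about experience for any known input.
>         return REPORTS.get(wavelength, "I detect something, but have no word for it.")
>
>     print(report("700nm"))  # prints a claim of redness, true or not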
>
> > "Qualia Blindness" is similar to the "pejorative" term "Naive Realism".
> > That fact that it is "pejorative" doesn't really matter compared to the
> > facts it is describing. In fact, some people think the fact that "Naive
> > Realism" is factually "Naive" is a good thing
> > <https://ida.mtholyoke.edu/xmlui/handle/10166/4025>. I would bet that
> John
> > would agree that his view is 'qualia blind' and that he is perfectly OK
> > with using one word for all things 'red' as he has indicated multiple
> > times. And of course, saying you should ignore qualia, as Dennett does,
> is
> > the very definition of being qualia blind. Dennett openly admits that.
>
> How does the number of words one uses to describe "all things red"
> matter in the slightest? Words are words. That your model uses a
> thousand words to try to explain redness does not make your model
> any less qualia blind than the single word "red".
>
> > Qualia blindness is as qualia blindness does. If you only have one word
> > for all things red, that is, by definition, qualia blindness. It is simply
> > a fact that having one word for all things 'red' tells you nothing of the
> > actual physical qualities of any of the many things it is a label for.
> > Having a model (and the language of such) of the physical world that
> > ignores, or does not include, qualia is, by definition, qualia blind. If
> > you don't like the term "qualia blind", then every time I use it please
> > substitute it with: "having any model of physical reality that does not
> > include objectively observable qualia". The same goes for anyone who
> > claims we should "quine qualia", and so on.
>
> Again your model is composed of words and pictures, all of which are
> mere abstract Shannon information and therefore devoid of
> objectively observable qualia. Your model is just as qualia blind as
> all the models that it presumes to criticize on that account.
>
> >> Not at all. Inverted perception in no way proves that qualia are not
> >> "phantasms of the mind" to use Newton's terminology. In fact, the
> >> rewiring I described between the retina and the visual cortex is
> >> specifically in reference to the signalling pathway model.
> >>
> >
> > More evidence you are not understanding what I'm trying to say. If you put
> > a red-green signal inverter in the optic nerve
> > <https://canonizer.com/videos/consciousness/?chapter=Perception_Inverted>,
> > the light and the "L-cone" will be firing 'red', but the knowledge will
> > not have a redness quality, it will have a greenness quality. This fact
> > necessarily proves that neither the L-cone nor the 'red' light (as you
> > are claiming) has anything to do with the physical quality of knowledge
> > (since it isn't redness, it is greenness when the inverter is in place).
>
> Criss-crossing the neural pathways is the only technologically
> feasible way that an inline red-green inverter could work. I doubt
> that the signal transduced from an L-cone is all that different from
> the signal transduced from an M-cone except with regard to the
> specific pathway it takes from the retina to the visual cortex.
> Sending one cone's signal down the other cone's pathway will trigger
> greenness in response to red.
>
> So how does your red-green signal inverter work? Magic?
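>
> As a toy illustration (a sketch in Python; the names and numbers are my
> own invention, not an actual neural model), the only "inverter" I can
> picture is a swap of which downstream pathway each cone signal is
> routed to:
>
>     # Toy model of an inline red-green "inverter": the only mechanism
>     # sketched here is rerouting each cone's signal down the other
>     # cone's pathway (assumed for illustration).
>     def invert_red_green(signals: dict) -> dict:
>         routed = dict(signals)
>         # Send the L-cone signal down the M-cone pathway and vice versa.
>         routed["L_pathway"], routed["M_pathway"] = signals["M_pathway"], signals["L_pathway"]
>         return routed
>
>     retina_output = {"L_pathway": 0.9, "M_pathway": 0.1}  # strong 'red' response
>     print(invert_red_green(retina_output))  # cortex now receives the 'green' pattern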
>
> > I'm in a different camp
> > <https://canonizer.com/topic/79-Neural-Substtn-Fallacy/2>. Functionalists
> > are the only ones with a "hard problem", which they have no idea how to
> > address, let alone having any way of verifying what they think must be.
> > My prediction is that they will first falsify "glutamate" as being
> > the same as "redness", and then experimentalists will substitute glutamate
> > with something else physical.
>
> I am not so certain about the "substrate independence" of specific
> qualia. While there is no reason why a simulated brain could not
> produce qualia, there is also no checksum to verify the fidelity of
> any simulated qualia. I suppose that any particular quale could be
> reproduced in many different substrates but would have the highest
> fidelity in its native substrate. Kind of like using a virtual machine
> to run Windows on a Mac. It works, but not quite as well as a native
> installation on a PC. So would that be partial substrate independence?
>
> > Your "signalling pathway model" is a great
> > model. It's about the only prediction of what qualia are that nobody has
> > created a camp for yet. Experimental results could certainly verify it
> is
> > a "readiness pathway" that we experience as redness right? Would you be
> > willing to help us create a "signalling pathway model" camp, so
> > experimentalists have another way to test for this qualia possibility?
>
> Thanks. As I have mentioned before, it is part of a larger theory on
> the emergent properties of synergistic systems. I suppose I could
> write something up specifically in reference to qualia.
>
> > But
> > if experimentalists found a particular "pathway" that always resulted in
> > subjective redness, then this same "pathway" would always produce the same
> > redness no matter what brain it was in, right?
>
> Not necessarily. Redness is likely to be learned knowledge, and the
> specific neurons and synapses involved would have been stochastically
> trained during development. So I would expect variation between
> different subjects as to the specific neurons involved, but relative
> consistency within any given subject over repeated trials using the
> same color.
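>
> A toy numerical sketch of what I mean (in Python; purely illustrative,
> with made-up names and numbers, not a model of real development): each
> "subject" randomly recruits a set of neurons for red during
> development, then reuses that same set on every later trial.
>
>     import random
>
>     # Purely illustrative: each "subject" stochastically recruits a set
>     # of neuron indices for 'red' during development, then reuses it.
>     def develop_subject(seed: int, n_neurons: int = 100, k: int = 5) -> set:
>         rng = random.Random(seed)
>         return set(rng.sample(range(n_neurons), k))
>
>     alice = develop_subject(seed=1)
>     bob = develop_subject(seed=2)
>     print(alice == bob)                      # almost certainly False: between-subject variation
>     print(alice == develop_subject(seed=1))  # True: within-subject consistency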
>
>
> >> Reframing the "the hard mind-body problem" as the "color problem" does
> >> not help in the slightest because the "color problem" has remained
> >> unsolved for over 300 years and was first voiced by Isaac Newton in
> >> the 17th century.
> >
> > Exactly, and the only reason is that everyone has been, to date,
> > qualia blind. None of the sub camps of RQT can be falsified as long as all
> > experimentalists are qualia blind. To me, most of the theories, and all the
> > religious stuff, are what I think of as 'crap in the gap' (similar to the
> > idea of the God of the Gaps in evolutionary theory). As long as we can't
> > falsify things, people can believe any crap they want to believe. Qualia
> > blind people "correct" for any physical differences observed in the brain,
> > labeling it all with the single word 'red'. Again, doing this makes one
> > blind to any different physical qualities they may be detecting. As long
> > as experimentalists continue to do this, they can't falsify any of the sub
> > camps of RQT.
>
> Qualia blindness is likely an inescapable state of affairs. Simply
> creating a model that claims the existence of physical qualities does
> not magically grant one the ability to identify or manipulate them,
> nor does it cause them to exist. Qualia are much like the Tao in that
> the redness that can be spoken of, or described in a model, is not the
> true redness.
>
> Stuart LaForge
>