[ExI] Qualia blind thinking (Was re: Uploads are self)
Jason Resch
jasonresch at gmail.com
Tue Mar 17 22:48:05 UTC 2026
On Tue, Mar 17, 2026, 5:05 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Tue, Mar 17, 2026 at 2:25 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, Mar 17, 2026, 1:28 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Tue, Mar 17, 2026 at 11:19 AM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Tue, Mar 17, 2026, 12:33 PM Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>> To me, all this talk is so completely qualia blind (focused only on
>>>>> what is objectively observable), ignoring what consciousness is (and
>>>>> how half of your consciousness is in the left hemisphere, and the
>>>>> other half in the right).
>>>>>
>>>>> This statement was in Jason's essay:
>>>>>
>>>>> "The reason is that empirical science, being that which is practiced
>>>>> by way of objective experiments, cannot answer these questions in a
>>>>> satisfactory way. This remains true no matter how advanced technology
>>>>> becomes in the future."
>>>>>
>>>>
>>>>
>>>> I should highlight that this statement in particular is unrelated to
>>>> understanding qualia. Here I was writing only about the question of whether
>>>> another mind subjectively survives an upload or if they subjectively die.
>>>>
>>>> There are personal subjective experiments you can perform to verify you
>>>> do indeed survive (assuming you do). But there's no objective test another
>>>> can perform to decide this question.
>>>>
>>>> Note that this does not rule out the sorts of personal subjective
>>>> qualia experimentation that you advocate for.
>>>>
>>>>
>>>>> And Clark constantly makes similar statements all the time. But to
>>>>> me, this is evidence of how corrupting the neuro substitution argument
>>>>> (fallacy) is. Why would you give up faith and hope for consciousness being
>>>>> fully approachable via science?
>>>>>
>>>>
>>>> I don't, but there are certain classes of questions, like the problem
>>>> of other minds, the question of the reality of the experienced world,
>>>> questions of subjective survival, which can't be decided by empirical
>>>> (objective) tests.
>>>>
>>>> Do you acknowledge the limits of empiricism for these particular
>>>> questions?
>>>>
>>>
>>>
>>> I do not acknowledge this. I have faith and hope that we are already
>>> fully observing the subjective mind.
>>>
>>
>> But this is an entirely separate question from the types of questions I
>> identified as being beyond objective empirical experiments.
>>
>> If you think that no question is beyond objective science, how would
>> you propose testing the hypothesis that another's subjective identity has
>> survived a destructive mind upload or destructive teletransportation using
>> new matter to assemble the person at the new location? What exactly would
>> you be testing for? What experiment would reveal the person survived vs. a
>> new clone was created?
>>
>
> I'm just saying that once we demonstrate which of all our descriptions of
> stuff in the brain is a description of elemental redness, and how these
> elemental qualities are subjectively bound into composite gestalt
> experiences, it will be obvious how all this stuff works, and why the
> neuro substitution experiment will fail (the case Chalmers argues is a
> possibility).
>
The only way this can realistically fail is if the brain is using something
uncomputable. Otherwise any behavior the brain manifests (at any layer) is
something that an appropriately programmed machine can replicate. If you
think glutamate molecules are important, then the computer can simulate
glutamate molecules and reproduce the same neuronal behavior when neurons
interact with glutamate. If someone else thinks electrons and quantum
fields are important, then we can have the computer simulate electrons and
quantum fields.
The only out, if you want to escape the consequences of Chalmers/Zuboff's
replacement arguments, is to deny the possibility that a synthetic
replacement could replicate the behavioral functions of the biological
machinery. But again, this requires the biological machinery to do
something that is not Turing-machine emulable.
> When substituting whatever is presenting redness to the binding system,
> you won't be able to progress to the next neuron, because nothing but
> glutamate will have a redness quality.
>
What does this redness quality do to the neuron that receives it? As I see
it, the only thing that matters to a neuron is how its propensity to fire
or not fire will change. And that is all that matters to the next
downstream neuron, and so forth. I don't see what effect a purely
epiphenomenal redness quality can have on anything. And if it isn't
epiphenomenal, then it has causal properties that can be replicated by a
machine.
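To put the functional point in concrete terms, here is a minimal sketch in
Python (a toy leaky-integrate-and-fire abstraction, purely illustrative, not
a claim about actual neural biochemistry) of what "all that matters to a
neuron" amounts to: incoming inputs update its propensity to fire, and
nothing else is visible to the next neuron downstream:

    # Toy illustration, not a biological model: a neuron abstracted as a
    # mapping from weighted inputs to a firing decision. Any substrate
    # implementing the same mapping is indistinguishable downstream.
    def lif_step(potential, inputs, weights, leak=0.9, threshold=1.0):
        """One leaky-integrate-and-fire update: returns (new_potential, fired)."""
        potential = leak * potential + sum(w * x for w, x in zip(weights, inputs))
        if potential >= threshold:
            return 0.0, True   # fire and reset
        return potential, False

    # Whether this update is realized by ion channels, simulated glutamate,
    # or this function, the next neuron only ever "sees" the resulting spike.
    potential, fired = lif_step(0.5, inputs=[1, 0, 1], weights=[0.3, 0.4, 0.35])
    print(fired)  # True: 0.9*0.5 + 0.65 = 1.10 >= 1.0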
> We will know things like whether some animal, like maybe a particular
> neuro-engineered bat, uses my redness quality to represent echolocated
> bugs/food with, so we will reliably know what it is like (i.e., it will
> never fail).
>
Only if your intrinsicist theory of consciousness turns out to be true. But
I don't see how this theory could be proven. For even if both bat brains
and human brains use glutamate and even if we established glutamate
reliably maps to redness in humans, how could we extrapolate from this that
glutamate maps to redness in bat brains? How could we establish that no
other thing besides glutamate can generate the redness experience? How do
we even know two humans mean the same thing when they say red? You say a
ponytail can establish this, but I'm not so sure. The Hogan twins have a
connected mind, and can see through each other's eyes, but it seems
possible to me each could have a different subjective quality for red.
> Just like I don't need to go to space to know that the laws of physics
> apply off Earth. They just always do, and that is all that is required to
> engineer space robots and all that.
>
What makes it hard is we can't "travel into other people's minds" to verify
any sort of equality or commonality between two subjects' subjective
experiences.
>
>
>
>>> The only things we don't yet know are things like which of all our
>>> descriptions of stuff in the brain is a description of redness. Once we
>>> make that connection, it will all make sense, and we will know, in 3
>>> different ways, what it is like for other brains. (See: Three types
>>> of effing the ineffable
>>> <https://docs.google.com/document/d/1JKwACeT3b1bta1M78wZ3H2vWkjGxwZ46OHSySYRWATs/edit?tab=t.0>
>>> )
>>>
>>
>> Already we have the technology with which you could show a picture of a red
>> fish to a multimodal AI, ask it "what color is the fish in this picture?",
>> then trace the causality of all the neural activations that ultimately
>> lead to the AI saying "The fish is red."
>>
>
> If that were true, we would already know which of all our descriptions of
> stuff in the brain is a description of redness.
>
Yes, I think we could. Though there's no guarantee that multimodal LLMs
mean the same thing as we do when they say "red."
> There are even popular results where fMRIs observe all this in people's
> brains, and they render colored images of the knowledge being observed in
> the brain, as you describe. However, you will notice that the colors
> rendered from what is observed are always mapped in some way back to light
> wavelengths. In other words, if a person has inverted red/green qualities,
> it would miss the difference, and map both of those different qualities to
> the same redness.
>
> We currently have the ability to produce white 'sprites' in the brains of
> subjects with direct neuro stimulation (a la cortical neuro visual
> prosthetics). And on rare occasions, we can get people to experience color
> qualities. But science clearly doesn't know what is responsible for which
> color qualities, nor how to reliably produce colors.
>
I think it's just a technical limitation around our inability to reliably
stimulate single neurons; the electrical stimulus we provide stimulates a
bunch of adjacent neurons at once, which confounds the result.
My belief is we won't find any special difference among the neurons
themselves, between the neurons that reliably trigger green experiences
when stimulated vs. the ones that reliably trigger red experiences. The
only important difference between them will be where they stand within the
neural network, and the different functional relations that follow when one
of those is stimulated vs. another.
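As a toy picture of this (the unit names below are hypothetical, purely for
illustration), two units can be internally identical while their "meaning" is
fixed entirely by what they are wired to:

    # Toy illustration with hypothetical unit names: the two units are
    # internally identical; only their downstream connections differ.
    wiring = {
        "unit_a": ["report 'red'", "recall strawberries"],
        "unit_b": ["report 'green'", "recall grass"],
    }

    def stimulate(unit):
        # The effect of stimulation is determined entirely by the wiring,
        # not by any intrinsic property of the unit itself.
        return wiring[unit]

    print(stimulate("unit_a"))  # the downstream effects associated with "red"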
> In other words, this is a hot area of research, seeking to know how to
> produce colored experiences, even if only to be able to improve cortical
> neural visual prostheses to be colored. This is my area of interest, and
> the people doing research in this field are where I think the next
> breakthrough in consciousness understanding will be achieved. In other
> words, I predict the people doing this research will be the first ones to
> discover which of all our descriptions of stuff in the brain is a
> description of redness, and that this will be considered the greatest
> scientific discovery of all time. This is where I am spending my research
> efforts these days, and it is very interesting.
>
I agree that this research into understanding consciousness is perhaps the
most important kind of research there is.
Jason
>
>> But what would this test show (or fail to show) that a similar causal-chain
>> analysis of neurons in a human brain would not?
>>
>> I think both tests would leave us either equally informed or equally
>> dumbfounded. I don't see one test leading to some surprising answer. I
>> think we would simply uncover the causal algorithms and functional patterns
>> employed by the neural networks in both cases to build up a complex
>> discriminated information state, and then to further process the words to
>> extract the required information from this discriminated state, yielding
>> the answer "The fish is red." in both situations.
>>
>> What else could it be?
>>
>>
>>> Oh, and did I mention that neuro ponytails will disprove solipsism, and
>>> theories like we are brains in vats?
>>>
>>
>> I've seen you claim that, but I disagree, since in all cases and at all
>> times you can only ever be aware of "a single conscious state" at any
>> particular time, even if that state happens to be one that includes data
>> from multiple sensory systems at once.
>>
>> So while the mind merging might give the strong intuition that you are
>> connected to another mind, that impression doesn't prove there is indeed
>> another mind rather than the simulation/vat deciding to give you this
>> illusory experience.
>>
>>
>>
>>>
>>>>> I added a statement to this effect, quoting the above statement, in the
>>>>> highest-level super camp "Approachable via Science."
>>>>>
>>>>> https://canonizer.com/topic/88-Theories-of-Consciousness/2-Approachable-Via-Science?is_tree_open=0&asof=review
>>>>>
>>>>> You guys are completely ignoring the fact that in the near future we
>>>>> will be doing very significant neurohacking and re-engineering of our
>>>>> brain.
>>>>>
>>>>
>>>> I acknowledge the utility of such experiments. However, I reserve some
>>>> doubt that they will enable arbitrary minds to understand arbitrary qualia.
>>>> For I think the mind in question defines the set of qualia accessible to it.
>>>>
>>>>
>>>> One minor example is that most of us are trichromats, while others are
>>>>> tetrachromats, and some of us suffer from achromatopsia and experience no
>>>>> color qualities. Surely in the near future we will be able to fix issues
>>>>> like this and completely redesign our color knowledge to include 10, or
>>>>> perhaps even one hundred, primary color qualities that no human has
>>>>> experienced before.
>>>>>
>>>>
>>>> Yes, I agree with that.
>>>>
>>>>> And we will be able to freely choose what qualities we use to
>>>>> represent what wavelengths of light on a whim. To say nothing of being
>>>>> able to increase the phenomenal resolution of our visual knowledge by
>>>>> thousands of times, in both our current brains and in any avatar brain we
>>>>> might choose to do subjective mind merging with, similar to the way the
>>>>> left hemisphere is subjectively merged with the right.
>>>>>
>>>>
>>>>
>>>> But note that by modifying the brain in the manner you suppose, you are
>>>> always creating a new mind which will have knowledge of the way some things
>>>> are to it, but it can never simultaneously hold the way some things are to
>>>> others who are not it. I don't see any way around this purely logical
>>>> restriction. Any given vantage point will always see some things, but not
>>>> others.
>>>>
>>>>
>>> See my other post in this chain where I refer to the YouTube short where
>>> we'll be able to upgrade half, or small portions, of our consciousness, to
>>> test them out before we go full-blown upgrade, and, if we really want to,
>>> we'll be able to mind meld with previous copies of ourselves (for
>>> nostalgia's sake), to see how terrible consciousness is now, compared to
>>> what it will soon be like.
>>>
>>
>>
>> There are cases where a more complex mind can experience the qualia of a
>> less complex mind, when the lower-dimensional qualitative state exists as a
>> point within the higher-dimensional qualitative space. For example, a
>> color-sighted person and a color-blind person can both experience a
>> monochromatic visual scene.
>>
>> But there are some qualitative states that don't commute, because they
>> don't belong to the same space. For example, no matter how many primary
>> colors one adds, no color translates to the taste of chocolate or smell of
>> cinnamon.
>>
>> Or consider the brain of an ant: it simply isn't capable of realizing the
>> highly complex information states that a human brain can realize, owing to
>> its comparative paucity of neurons. If you linked an ant brain and a human
>> brain with such a ponytail, what could the ant brain know of the human mind?
>>
>> Then consider incompatible qualitative states, like enjoying the smell of
>> gasoline vs. hating the smell of gasoline. Can the same mind hold these two
>> mutually inconsistent qualitative perceptions simultaneously? Or can it
>> only hold one such qualitative state at a time, hence each state is
>> unknowable to the other mind at the other time?
>>
>> I share your desire for answers, but my optimism is tempered by problems
>> such as these.
>>
>> Jason
>>