[ExI] Digital Consciousness

Brent Allsop brent.allsop at canonizer.com
Tue Apr 30 17:37:08 UTC 2013


OK Stathis,



Thankfully, it looks like lots of people are starting to get it.  Spike is
clearly getting it.  And Ben Zaiboc said:



<<<<

Brent Allsop <brent.allsop at canonizer.com> wrote:

> In this idealized theoretical world, it is glutamate that

> has a redness quality.  And this glutamate behaves

> the way it does, because of this redness quality.  ...



> That’s at least how I think about it.  Does this help

> you guys at all?



Yes, it helps enormously.

>>>>



(Ben, thanks for this.  I literally fell to my knees and cried when I
read this.)



But Stathis and James are still providing no evidence that they are
getting it at all.  Stathis, you are simply not being creative enough
when you conclude that it is impossible both for similar behavior to
arise from knowledge with different qualities (or with no qualities at
all) and, at the same time, for qualia to be unable to fade without our
noticing.



Another critical problem, at least with your terminology, is that much of
it is not focusing on the right thing.  It focuses more on picking the
strawberry than on knowing that your knowledge of the strawberry has a
redness quality that is very different from the qualitative nature of
your knowledge of the leaf.  Forget about things like “If the visual
neuron perceives red it sends signals to the neurons which make you say
‘I see red’” and instead focus on the system knowing what its redness
experience is like, and how this differs from greenness.  And think more
along the lines that it sends a “yes, that has a redness quality” signal
because it knows the qualitative nature of its knowledge, or at least the
causal properties of that nature, whatever it turns out to be.



For you guys who still aren’t getting it, let’s make this so elementary
it is impossible to miss.  Let’s build an even more simplified
theoretical model, and walk you through every single step of the
transmigration process, all the way to a final simulated system that
behaves the same.  We’ll describe two of these simplified versions, side
by side: one with dancing qualia, and the other with fading qualia.
We’ll describe exactly what the system’s experience will be like as the
process proceeds, including one example of qualia inverting, and one
example of qualia “fading”.



When we look out at the strawberry patch, we are aware of millions of 3D
voxel elements, each of which can be represented with any color,
including transparent.  So in the model I was talking about before, there
are millions of neurons, one representing each of these voxel elements.
These voxel neurons can fire with a diverse set of neurotransmitters,
each of which has one of the color qualities we can experience.



All of these millions of voxel neurons send their color neurotransmitters
to a single large ‘binding’ neuron.  This single large binding neuron is
a very complicated system, as it enables all these isolated color voxel
elements to be bound together into one unified phenomenal experience.  In
other words, it is doing much more than just sending the signal that this
red thing is the one we want.  It is also aware of the qualitative nature
of this knowledge, in all its differences and qualitative diversity, and
it enables the system to talk and think about all this phenomenal
diversity.



But let’s simplify all that, and have just two 2D pixel-element neurons,
each of which can fire with either *glutamate*, which behaves the way it
does because of its redness quality, or *dopamine*, which behaves the way
it does because of its greenness quality.



Even though this simplified binding neuron only needs to be aware of the
qualitative nature of two pixel elements at the same time, it is still a
very complex piece of binding machinery.  It doesn’t just have the
ability to say the ‘red’ one is the one we want; it knows what its red
knowledge is qualitatively like, and what greenness is like, and it only
says it wants the red one because it is having the redness qualitative
experience, which is very different from its greenness qualitative
experience.



Let’s think of the two pixel-element neurons differently.  One will be
the reference neuron, the other the sample neuron.  The reference neuron
will always be firing with *glutamate*, and the sample neuron’s firing
will be compared against this reference.



The binding neuron is aware of the qualitative nature of both of them,
and says one is like the other because it is qualitatively the same, or
because it has the causal properties of something with a redness quality.
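
To make this toy model as concrete as possible, here is one way it might
be sketched in Python.  This is entirely illustrative: every name in it,
like PixelNeuron and BindingNeuron, is made up for this sketch, and code
like this can only capture the causal properties of the qualities, not
the qualities themselves.

# Toy sketch of the idealized theoretical world, where neurotransmitters
# behave the way they do because of their intrinsic quality.
GLUTAMATE = "glutamate"  # assumed, in this world, to have a redness quality
DOPAMINE = "dopamine"    # assumed, in this world, to have a greenness quality


class PixelNeuron:
    """A pixel-element neuron that fires a particular neurotransmitter."""
    def __init__(self, neurotransmitter):
        self.neurotransmitter = neurotransmitter

    def fire(self):
        return self.neurotransmitter


class BindingNeuron:
    """Compares the sample against the always-glutamate reference.
    Behaviorally it looks like nothing more than a high-fidelity
    glutamate detector."""
    def compare(self, reference_signal, sample_signal):
        if sample_signal == reference_signal:
            return "yes, that is qualitatively the same as the reference pixel"
        return "no, that is qualitatively different"


reference = PixelNeuron(GLUTAMATE)  # always fires glutamate (redness)
sample = PixelNeuron(GLUTAMATE)
binding = BindingNeuron()
print(binding.compare(reference.fire(), sample.fire()))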



So, the first neuron we want to transmigrate is of course the sample
pixel neuron.  Since the binding neuron is like a high-fidelity
*glutamate* detector, obviously nothing but real *glutamate* will make it
say, “yes, that is qualitatively the same as the reference pixel”,
because only real glutamate has the causal properties of redness.



The dancing qualia case is quite simple, because we want to replace a
pixel neuron firing with *glutamate* with one that is firing with
*dopamine*.  Or, if you are a functionalist, you will be replacing the
“functional isomorph”, or “functionally active pattern”, that has the
causal properties of redness with a “functional isomorph” that has the
causal properties of a greenness quality.



The transmigration process calls for providing a transducer which, when
it detects something with a greenness property, sends real *glutamate* to
the binding neuron, so the binding neuron can still say: “yes, that has a
redness quality”.
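
Continuing the illustrative Python sketch above, the dancing-qualia
transducer for the replaced sample neuron might look something like this
(again, only a toy model of the theoretical world):

# The replaced sample neuron now fires dopamine (greenness), but a
# transducer converts it to real glutamate before the binding neuron.
class Transducer:
    def __init__(self, replaced_neuron):
        self.replaced_neuron = replaced_neuron

    def fire(self):
        signal = self.replaced_neuron.fire()
        return GLUTAMATE if signal == DOPAMINE else signal


new_sample = Transducer(PixelNeuron(DOPAMINE))
# The unmodified binding neuron still answers "redness", even though the
# replacement neuron's own quality, in this model, is greenness.
print(binding.compare(reference.fire(), new_sample.fire()))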



In the fading qualia case, we are going to use a binary “1” to represent
*glutamate*, and a “0” to represent *dopamine*.  Functionalists tend to
miss a particular fact here, and must pay close attention to it: this
“1”, which represents the “functional isomorph”, by definition does not
have the same quality the functional isomorph has.  The “1” is only
something being interpreted as abstracted information, which in turn can
be interpreted as representing the *glutamate*, or the functionally
isomorphic pattern, or whatever it is that actually has the redness
quality.  Obviously, the transduction layer in this case must be
something that, no matter what physically represents the “1” as input,
produces real glutamate when it sees that “1”, so the binding neuron will
give the signal: “yes, that has a redness quality”.
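
In the same illustrative sketch, the fading-qualia replacement of the
sample neuron might look something like this.  The point to notice is
that the "1" itself has no redness quality; it is only ever interpreted
that way:

# The sample is now an abstract bit: 1 is *interpreted as* glutamate,
# 0 as dopamine.  The bit itself has no redness quality.
class AbstractPixelNeuron:
    def __init__(self, bit):
        self.bit = bit

    def fire(self):
        return self.bit


# Whatever physically carries the "1", on seeing it this layer produces
# real glutamate for the still-biological binding neuron.
class BitTransducer:
    def __init__(self, abstract_neuron):
        self.abstract_neuron = abstract_neuron

    def fire(self):
        return GLUTAMATE if self.abstract_neuron.fire() == 1 else DOPAMINE


abstract_sample = BitTransducer(AbstractPixelNeuron(1))
print(binding.compare(reference.fire(), abstract_sample.fire()))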



OK, so now that the sample neuron has been replaced, and we can switch
back and forth between the two versions with no change in behavior, we
can move on to the binding neuron.  But keep in mind that this one sample
neuron could be expanded to millions of 3D voxel elements, all firing
with diverse sets of neurotransmitters that map to every possible color
we can experience.  And keep in mind the big job this binding neuron has
to do, to bind all of this together so it can all be experienced,
qualitatively, at the same time.



In the dancing qualia case, we now have to provide a transducer between
the reference neuron, which is still firing with *glutamate*, and the new
binding machinery, converting that glutamate to *dopamine*.  So, when the
system sees *dopamine* on both the sample and the reference, it will
finally say: “Yes, these are qualitatively the same.”  It should now be
blatantly obvious to everyone how different this system is when we switch
back and forth, even though some naive person may be tempted to believe
that both of the “yes, they are the same” signals, before and after the
switch, are talking about ‘red’ knowledge.
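
Continuing the sketch, this step amounts to inverting the reference
signal on its way into the replaced binding machinery (still only a toy
model):

# The reference neuron still fires glutamate, but a transducer converts
# it to dopamine for the new, dopamine-based binding machinery.  (In the
# story the binding neuron has been replaced at this point; behaviorally
# its equality test is unchanged.)
class InvertingTransducer:
    def __init__(self, neuron):
        self.neuron = neuron

    def fire(self):
        return DOPAMINE if self.neuron.fire() == GLUTAMATE else GLUTAMATE


inverted_reference = InvertingTransducer(reference)  # glutamate -> dopamine
replaced_sample = PixelNeuron(DOPAMINE)              # fires dopamine directly
# Before and after the switch the answer is the same "yes", but before
# the switch it was about redness and now it is about greenness.
print(binding.compare(inverted_reference.fire(), replaced_sample.fire()))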



The fading qualia case is similar.  There is a “1” present on both the
sample and now also on the reference, thanks to a new transduction layer
between the reference pixel, which is still producing real glutamate, and
the virtual binding neuron.  This enables the virtual neuron to send a
signal that can be thought of as “these are qualitatively the same”, even
though everyone should be clear that this is just a lie, or at best an
incorrect interpretation of what the signal really, qualitatively, means.
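
At the end of the fading-qualia path, the whole comparison in the toy
sketch collapses to a bare equality test over bits:

# Fully transmigrated end state: both reference and sample are abstract
# bits.  Nothing here has, or detects, a redness quality; the signal
# "these are qualitatively the same" is only our interpretation of two
# matching bits.
def virtual_binding_neuron(reference_bit, sample_bit):
    if reference_bit == sample_bit:
        return "these are qualitatively the same"
    return "these are qualitatively different"


print(virtual_binding_neuron(1, 1))  # behaves just like the original "yes"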



So, please return and report, and let me know: can I fall to my knees
and weep yet?


On Tue, Apr 30, 2013 at 5:02 AM, Stathis Papaioannou <stathisp at gmail.com> wrote:

> On Tue, Apr 30, 2013 at 3:50 AM, Brent Allsop
> <brent.allsop at canonizer.com> wrote:
> >
> >
> >
> >
> > Stathis said:
> >
> > <<<<
> > A volume of neural tissue in the visual cortex is replaced with a
> > black box that reproduces the I/O behaviour at the interface with the
> > surrounding neurons. Do you see how "reproduces the I/O behaviour at
> > the interface with the surrounding neurons" means the subject must
> > behave normally?
> >>>>>
> >
> > Exactly.  But if one box is representing the strawberries and leaves with
> > inverted red green qualities, even though it is behaving exactly the
> same,
> > this qualitative difference is all important to consciousness.
>
> You have said, if I have it correct:
>
> (a) It is possible to reproduce the behaviour of neural tissue with a
> different substrate, but this won't necessarily reproduce the qualia;
> and
> (b) It is not possible to have your qualia change or disappear without
> noticing.
>
> Do you see how these two statements cannot both be true?
>
> > I’ve attempted to describe one final key point, but you guys are
> providing
> > lots of evidence that you still don’t get this one very important thing.
> >
> > This evidence includes when Kelly replied to my saying:
> >
> >
> > <<<<
> > The prediction is, you will not be able to replace any single neuron, or
> > even large sets of neurons that are the neural correlates of a redness
> > quality, without also replacing significant portions of the rest of the
> > system that is aware of what that redness experience is like.
> >>>>>
> >
> > With:
> >
> >
> > <<<<
> > Replacing a single neuron is going to change the qualia of redness?
> Really?
> > You can't replace a single neuron without losing something? You better
> not
> > play soccer, you risk losing your consciousness.
> >
> > Saying something like this undercuts your credibility Brent.
> >
> > You absolutely can replace small parts of the brain without changing how
> the
> > person feels. Ask anyone with a cochlear implant. This is a silly claim.
> >>>>>
> >
> > I also don’t think Stathis is fully understanding this.  The following
> could
> > be evidence for this when he responded to Spike with:
> >
> >
> > <<<<
> > It's difficult if you try to define or explain qualia. If you stick to
> > a minimal operational definition - you know you have an experience
> > when you have it - qualia are stupidly simple. The question is, if a
> > part of your brain is replaced with an electronic component that
> > reproduces the I/O behaviour of the biological tissue (something that
> > engineers can measure and understand), will you continue to have the
> > same experiences or not? If not, that would lead to what Brent has
> > admitted is an absurd situation. Therefore the qualia, whatever the
> > hell they are, must be reproduced if the observable behavior of the
> >
> > neural tissue is reproduced. No equations, but no complex theories of
> > consciousness or attempts to define the ineffable either.
> >>>>>
> >
> > So let me try to explain it in more detail to see if that helps.
> >
> > Let’s just imagine how the transmigration experiment would work in an
> > idealized material property dualism theoretical world, even though
> reality
> > is likely something different and more complex.
> >
> > In this idealized theoretical world, it is glutamate that has a redness
> > quality.  And this glutamate behaves the way it does, because of this
> > redness quality.  Also, imagine that there are multiple other
> > neurotransmitters in this world that are like different color crayons.
> > Brains in this world use these colorful neurotransmitters to paint
> > qualitative conscious knowledge with.
> >
> > In a simplified way, let’s also imagine that it is a single large neuron
> > that is binding all these synapses representing voxel elements in a 3D
> > space, so we can be aware of all of the colors all at once.  If the
> upstream
> > neurons fire glutamate, for that 3D element, this large binding neuron
> knows
> > there is a redness quality at that particular location in 3D space.  When
> > another upstream neuron fires with another neurotransmitter it will know
> > there is a leaf there, represented with its greenness quality, at the 3D
> > pixel element representing a point on the surface of the leaf.  In other
> > words, all these crayons are just standing alone, unless there is also
> some
> > system that is binding them all together, so we can be aware of all of
> them
> > and their qualitative differences, at one time.
> >
> > When we look at only the behavior of this large binding neuron, and only
> > think of it abstractly, this neuron which is able to tell you whether a
> > particular neurotransmitter has a redness quality or not, will simply
> look
> > like a high fidelity glutamate detector.  Nothing but the glutamate will
> > result in the neuron firing with the ‘yes that is my redness quality’
> > result.
> >
> > Now, within this theoretical world, think of the transmigration process
> when
> > it replaces this one large binding neuron.  Of course the argument admits
> > that the original neuron can’t deal with being presented with ones and
> > zeros.  So, when it replaces the glutamate, with anything else, it
> specifies
> > that you also need to replace the neuron detecting the glutamate, with
> > something else that includes the translation hardware, that is
> interpreting
> > the specific set of ones and zeros that is representing glutamate, as the
> > real thing.  And this virtual neuron only gives the ‘yes that is my
> redness’
> > when this predefined set of ones and zeros is present.
> >
> > In other words, when people think about this transmigration argument of
> > replacing one neuron at a time in this way, they are explicitly throwing
> out
> > and ignoring what is important to the ‘that is real glutamate’ detecting
> > system.  They are ignoring the additional hardware system that is
> required
> > that binds all this stuff together, so we can be aware of redness, at the
> > same time as we are aware of greenness, so we can say, yes they are the
> > same, or no, they are qualitatively different.
> >
> > If a single neuron is what our brain uses to detect glutamate (or
> whatever
> > it is that is the neural correlate of redness), then you can see the
> obvious
> > fallacy in the transmigration thought experiment.  And it is also
> > theoretically possible, that it is more than just a single neuron that is
> > involved in the process of detecting the causal properties of glutamate,
> so
> > that this system only says “That is my redness”, only if it is real
> > glutamate (or whatever it is that really is responsible for a redness
> > quality).  And not until you replace the entire binding system, which is
> the
> > complex process of detecting real glutamate, with an abstracted version
> > which can interpret a specific set of ones and zeros, as if it were the
> same
> > as glutamate, will it finally start behaving the same.  And of course,
> there
> > will be lots of fading quale, as lots of single neurons are placed in
> > between these two states.  Additionally, unlike the real thing, you will
> > never be able to ‘eff’ to know if the abstracted stuff, which is
> something
> > very different than redness, only being interpreted as redness, really
> has a
> > redness experience – unlike the real thing.
> >
> > That’s at least how I think about it.  Does this help you guys at all?
>
> The big neuron detecting the multiple neurotransmitters sends signals
> to downstream neurons. For example, it sends signals to motor neurons
> responsible for speech. If the visual neuron perceives red it sends
> signals to the neurons which make you say "I see red" and if it
> perceives green (which could be due in your model to, say, dopamine)
> it sends signals to the neurons which make you say "I see green". Now,
> a foolish engineer, who knows nothing about qualia, observes this and
> replaces your visual neuron with a black box which can tell the
> difference between the two neurotransmitters. If the black box
> detects glutamate it will stimulate the neurons that make you say "I
> see red" while if it detects dopamine it will stimulate the neurons
> that make you say "I see green". But this black box produces no visual
> qualia at all. So the result is a person who is blind but describes
> things normally, believes he can see normally, and tells everyone he
> can see normally... which you have agreed is absurd.
>
> I imagine now you will say that the black box cannot send the right
> signals to the motor neurons unless it truly does perceive colours,
> but that goes against the initial assumption, which is that the
> *externally observable behaviour* of neural tissue is computable. It
> is generally thought that chemistry is computable, but even if it
> isn't, the case against the substrate-dependence of consciousness is
> upheld by simply leaving out the detail that the artificial neurons
> are computerised.
>
>
> --
> Stathis Papaioannou
>