[ExI] Digital Consciousness

Brent Allsop brent.allsop at canonizer.com
Mon Apr 29 17:50:05 UTC 2013


Stathis said:

<<<<
A volume of neural tissue in the visual cortex is replaced with a
black box that reproduces the I/O behaviour at the interface with the
surrounding neurons. Do you see how "reproduces the I/O behaviour at
the interface with the surrounding neurons" means the subject must
behave normally?
>>>>

Exactly.  But if one box represents the strawberries and leaves with
inverted red/green qualities, even though it behaves exactly the same,
this qualitative difference is all-important to consciousness.


I’ve attempted to describe one final key point, but you guys are providing
lots of evidence that you still don’t get this one very important thing.

This evidence includes Kelly replying to my claim:

<<<<
The prediction is, you will not be able to replace any single neuron, or
even large sets of neurons that are the neural correlates of a redness
quality, without also replacing significant portions of the rest of the
system that is aware of what that redness experience is like.
>>>>

With:

<<<<
Replacing a single neuron is going to change the qualia of redness? Really?
You can't replace a single neuron without losing something? You better not
play soccer, you risk losing your consciousness.

Saying something like this undercuts your credibility, Brent.

You absolutely can replace small parts of the brain without changing how
the person feels. Ask anyone with a cochlear implant. This is a silly claim.
>>>>

I also don’t think Stathis fully understands this.  His response to Spike
could be evidence of that:

<<<<
It's difficult if you try to define or explain qualia. If you stick to
a minimal operational definition - you know you have an experience
when you have it - qualia are stupidly simple. The question is, if a
part of your brain is replaced with an electronic component that
reproduces the I/O behaviour of the biological tissue (something that
engineers can measure and understand), will you continue to have the
same experiences or not? If not, that would lead to what Brent has
admitted is an absurd situation. Therefore the qualia, whatever the
hell they are, must be reproduced if the observable behavior of the
neural tissue is reproduced. No equations, but no complex theories of
consciousness or attempts to define the ineffable either.
>>>>

So let me try to explain it in more detail to see if that helps.

Let’s imagine how the transmigration experiment would work in an idealized
material-property-dualism world, even though reality is likely something
different and more complex.

In this idealized theoretical world, it is glutamate that has a redness
quality, and glutamate behaves the way it does because of this redness
quality.  Also imagine that there are multiple other neurotransmitters in
this world that are like different color crayons.  Brains in this world use
these colorful neurotransmitters to paint qualitative conscious knowledge
with.

To simplify, let’s also imagine that it is a single large neuron that binds
all these synapses, each representing a voxel element in a 3D space, so we
can be aware of all of the colors at once.  If an upstream neuron fires
glutamate for a given 3D element, this large binding neuron knows there is
a redness quality at that particular location in 3D space.  When another
upstream neuron fires with a different neurotransmitter, it will know there
is a leaf there, represented with its greenness quality, at the voxel
element representing a point on the surface of the leaf.  In other words,
all these crayons just stand alone, unless there is also some system
binding them all together, so we can be aware of all of them, and their
qualitative differences, at one time.

When we look only at the behavior of this large binding neuron, and think
of it only abstractly, this neuron, which is able to tell you whether a
particular neurotransmitter has a redness quality or not, will simply look
like a high-fidelity glutamate detector.  Nothing but glutamate will result
in the neuron firing with the ‘yes, that is my redness quality’ result.
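
To make this concrete, here is my own minimal toy sketch, in Python, of
what such a detector amounts to in this idealized world.  The names
(binding_neuron, GLUTAMATE, the voxel coordinates) are illustrative
assumptions of mine, not claims about real neuroscience:

    # Toy model of the idealized world above.  Each voxel in 3D space is
    # painted with one neurotransmitter "crayon"; the binding neuron
    # reports a redness quality only for real glutamate.
    GLUTAMATE = "glutamate"   # stipulated carrier of the redness quality
    GLYCINE = "glycine"       # stands in for the greenness crayon

    def binding_neuron(voxels):
        """Map each 3D voxel location to the quality bound at it."""
        qualities = {}
        for location, transmitter in voxels.items():
            if transmitter == GLUTAMATE:
                qualities[location] = "redness"   # 'yes, that is my redness'
            elif transmitter == GLYCINE:
                qualities[location] = "greenness"
            else:
                qualities[location] = "no quality"
        return qualities

    # A point on the strawberry and a point on the leaf:
    print(binding_neuron({(1, 2, 3): GLUTAMATE, (4, 5, 6): GLYCINE}))
    # -> {(1, 2, 3): 'redness', (4, 5, 6): 'greenness'}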

Now, within this theoretical world, think of the transmigration process
when it replaces this one large binding neuron.  Of course, the argument
admits that the original neuron can’t deal with being presented with ones
and zeros.  So, when it replaces the glutamate with anything else, it
specifies that you must also replace the neuron detecting the glutamate
with something that includes the translation hardware: hardware that
interprets the specific set of ones and zeros representing glutamate as the
real thing.  And this virtual neuron only gives the ‘yes, that is my
redness’ result when this predefined set of ones and zeros is present.
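
Continuing the toy sketch above (again, the bit patterns and names are
hypothetical stand-ins of my own), the replaced version might look like
this:

    # The virtual neuron never sees glutamate.  It sees a bit pattern that
    # the translation hardware has agreed to *interpret* as glutamate, and
    # it answers 'redness' only for that predefined pattern.
    GLUTAMATE_CODE = 0b1010   # arbitrary stand-in meaning "this is glutamate"
    GLYCINE_CODE = 0b0101

    def virtual_binding_neuron(voxels):
        """Same I/O as binding_neuron, but keyed to interpreted bit patterns."""
        qualities = {}
        for location, code in voxels.items():
            if code == GLUTAMATE_CODE:
                qualities[location] = "redness"   # an interpretation, not glutamate
            elif code == GLYCINE_CODE:
                qualities[location] = "greenness"
            else:
                qualities[location] = "no quality"
        return qualities

    # Behaviorally indistinguishable from the original binding_neuron:
    print(virtual_binding_neuron({(1, 2, 3): GLUTAMATE_CODE, (4, 5, 6): GLYCINE_CODE}))
    # -> {(1, 2, 3): 'redness', (4, 5, 6): 'greenness'}

From the outside the two versions produce identical output; the question is
whether anything in the second one still has a redness quality, or merely a
pattern that is interpreted as redness.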

In other words, when people think about this transmigration argument of
replacing one neuron at a time in this way, they are explicitly throwing
out and ignoring what is important to the ‘that is real glutamate’
detecting system.  They are ignoring the additional hardware that is
required to bind all this stuff together, so we can be aware of redness at
the same time as we are aware of greenness, and can say yes, they are the
same, or no, they are qualitatively different.

If a single neuron is what our brain uses to detect glutamate (or whatever
the neural correlate of redness turns out to be), then you can see the
obvious fallacy in the transmigration thought experiment.  It is also
theoretically possible that more than a single neuron is involved in
detecting the causal properties of glutamate, so that the system says “that
is my redness” only if real glutamate (or whatever really is responsible
for a redness quality) is present.  Not until you replace the entire
binding system, the complex process of detecting real glutamate, with an
abstracted version that can interpret a specific set of ones and zeros as
if it were glutamate, will it finally start behaving the same.  And of
course, there will be lots of fading qualia as single neurons are placed in
between these two states.  Additionally, unlike with the real thing, you
will never be able to ‘eff’ the abstracted stuff to know whether it really
has a redness experience, since it is something very different from
redness that is merely being interpreted as redness.

That’s at least how I think about it.  Does this help you guys at all?

Brent Allsop



On Sun, Apr 28, 2013 at 11:54 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:

> On Mon, Apr 29, 2013 at 3:25 AM, Brent Allsop
> <brent.allsop at canonizer.com> wrote:
>
> > Hi Stathis,
> >
> > Thanks for putting so much effort towards this, and I apologize that I
> > still, despite significant progress thanks to everyone's help, have so
> > much difficulty communicating.
> >
> > Yes, I don't believe that "qualia can fade without you noticing" and
> > that it will not be possible for you to notice, without changing your
> > behavior, or any other way qualitative natures are disconnected from
> > consciousness, and its underlying neural correlates.
>
> Good.
>
> > You still believe that my real problem is that I still don't understand
> > why our behavior can't change due to the replacement.  I fully
> > understand all this, and you're still jumping back to a straw man that
> > I also do not accept, and I agree with you.  There is yet another
> > option that you aren't fully getting yet, that is not anything like
> > any of these epiphenomenal qualities that are disconnected from
> > reality.  In this way, the qualities are very real, and they have very
> > real causal properties.  These causal properties are properties we
> > already know, abstractly, all about its behavior.  We just don't know
> > about its qualitative nature.  We will think the system is detecting
> > glutamate, merely because of its causal behavior, when in fact, it is
> > detecting it because of the qualitative nature, of which the causal
> > behavior is a mere symptom, and all we know are abstracted
> > interpretations of the same.
> >
> > Let me try to put it this way.  James has admitted that an abstracted
> > representation of the causal properties of glutamate is just something
> > very different from the causal properties of glutamate, configured in
> > a way so that these very different causal properties can be
> > interpreted as real glutamate.  In other words, he has admitted that
> > the map is not like the territory, other than it can be interpreted as
> > such.  Do you agree with that?  And if you do, is it not a very real
> > theoretical possibility that the reason glutamate behaves the way it
> > does is because of its redness quality?  And is it not also a very
> > real possibility that the real glutamate has this ineffable quality
> > (blind to abstracting observation, or requiring mapping back to the
> > real thing) that the abstracted representation of the same, though it
> > can be interpreted as having it, doesn't really have?
>
> A volume of neural tissue in the visual cortex is replaced with a
> black box that reproduces the I/O behaviour at the interface with the
> surrounding neurons. Do you see how "reproduces the I/O behaviour at
> the interface with the surrounding neurons" means the subject must
> behave normally?
>
>
> --
> Stathis Papaioannou
>