[ExI] Digital Consciousness .

Brent Allsop brent.allsop at canonizer.com
Wed May 1 20:07:00 UTC 2013


Hi James

Thanks for this educational response, and for pointing out the typo.  I was
really struggling to understand that passage, until I started to suspect it
might be a typo, and finally saw your post confirming that it was.

Some of this was surprising, and it's frustrating that you work so hard to
escape from the theoretical cage I'm trying to box you into, in order to
show there is no hard problem here.  It seems that instead of trying to
understand what I'm saying, you just look for some way (sometimes a
mistaken way, as I'll attempt to show) to squeeze out through some not yet
rigorously defined term, and thereby justify your theory, or assertion,
that there is a hard problem here.

From what you've said, we still obviously need to further simplify, and
more rigorously define, what we/I mean by this "binding neuron".  I tried
to point out that it is likely much more than just one neuron, but this
seemed to be completely missed, so how about we call it a "binding system"
instead of a "binding neuron"?  What I'm trying to illustrate is that
leaving the entire binding system out of this substitution experiment is
thinking about it at the wrong level, and is what makes this thought
experiment a fallacy.  You see only the trees, and miss the forest.

So let's limit this binding system's mechanistic functionality to
indicating whether the reference knowledge is qualitatively the same as the
sample knowledge.  An additional requirement is that it make this
determination only when both pieces of knowledge being compared have
qualitative properties AND those qualitative properties (or the causal or
informational properties of the qualities) are the same.  If something
doesn't meet these requirements, it is not what is important, by
definition.
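
To make this contract concrete, here is a minimal sketch in Python (the
names are purely illustrative; this is a toy model of the requirement, not
a claim about any real neural machinery):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Signal:
        carrier: str                   # e.g. "glutamate", "dopamine", or "1"
        quality: Optional[str] = None  # e.g. "redness"; None if abstract

    def binding_system_says_same(sample: Signal, reference: Signal) -> bool:
        # Report "qualitatively the same" only when BOTH inputs actually
        # have a qualitative property AND those properties match.
        if sample.quality is None or reference.quality is None:
            return False  # abstract stand-ins never qualify, by definition
        return sample.quality == reference.quality

    # Real glutamate with a redness quality on both sides:
    print(binding_system_says_same(Signal("glutamate", "redness"),
                                   Signal("glutamate", "redness")))  # True
    # A bare "1" standing in for glutamate has no quality at all:
    print(binding_system_says_same(Signal("1"),
                                   Signal("glutamate", "redness")))  # False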

You said:

<<<<
Let me see if I can break this down for you. Here are the neurons you have:
>>>>

Then you started out great, but as soon as you got to the important part,
you jumped to the fading quale case, presenting a "1" or a "0" to the
abstracted or simulated version of the binding system.  Then, as if it
mattered, you correctly concluded:

<<<<
Now, if we assume MPD is true, then we have a problem, because this new
system should have no real qualia, but it CLAIMS that it is experiencing
real qualia the entire time, as its neurons were slowly replaced with
simulations. And the result is a theory where the qualia is epiphenomenal.
>>>>

I fully admit, and agree with you, that once the entire binding system is
replaced with an abstracted version, it can be thought of as acting as if
the "1" were real glutamate with the redness quality.  But the conclusion
you are drawing from this entirely misses the point of what I'm talking
about.

By definition, a "1", and a "0", do not have qualities (Why I didn't color
them red and green like you did).  By definition, they are not glutamate,
nor are they any kind of "functional isomorphs" or any other theoretical
thing that anyone may propose could theoretically have the quality we are
trying to test for.  So, by definition, the virtualized replacement of the
binding system, is not doing what we want it to do, and is only being
thought of as doing it.  All it is, is some configuration of some arbitrary
matter, which, by design, doesn't matter if it has qualities or not, but is
only being thought of, whatever arbitrary thing it is, as being a
comparator of a "1" and "0", which by definition do not have a redness or
greenness quality.  You could invert or replace the abstract machinery, in
an infinitely many different ways that can be thought of as behaving the
same way, which were all very different, and regardless of what you were
using, and regardless of how inverted the fundamental stuff was, as long as
you thought of its current particular arbitrary configuration, as a
comparator between a "1" and a "0", that is all it would be, is something
you are thinking of as if it were something qualitatively, very different
from what it really is.
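
For instance (a toy illustration only), each of these structurally
different mechanisms can equally well be "thought of as" the very same
comparator of a "1" and a "0":

    def compare_equality(x, y):   # direct equality test
        return x == y

    def compare_xor(x, y):        # a structurally different mechanism
        return (x ^ y) == 0

    def compare_inverted(x, y):   # even an inverted encoding, re-labeled
        return (1 - x) == (1 - y)

    for f in (compare_equality, compare_xor, compare_inverted):
        print(f(1, 0), f(1, 1))   # all behave identically: False True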

In other words, the fallacy in the substitution argument is this: when you,
in one single step, replace the very thing that is doing the detection,
binding, and comparison of the phenomenal qualities (i.e. the binding
system) with something that by definition and design has nothing to do with
qualities, then even though you can think of the resulting abstracted
behavior as the behavior you want, you are completely bypassing and
ignoring what is important.

Also, as you've pointed out, it might be possible for some religious person
to theorize about the state of things once you are well past any of the
fading/dancing quale partially replaced states, and the entire binding
system has been replaced with something that has none of that and is only
being thought of as having it.  It might then be possible to theorize that
a qualitative experience is still occurring.  The problem is, as you
correctly point out, that this could never be validated or proven, since
there is, by definition, no causal evidence for any such 'epiphenomena'.
Your conclusion is true, but only about this kind of non-causal
epiphenomena, and it has nothing to do with what this theory is predicting.

This theory predicts that if real glutamate (or some real functionally
active pattern, or whatever) is demonstrated to be what has a redness
quality, that fact will be reliably and qualitatively demonstrable to all
such real "binding systems" in all brains, in various never-failing weak
and strong causal ways.

In other words, if the demonstrable science performs qualitatively and
causally as is being predicted here, the fallacy making people think there
is an epiphenomenal hard problem can be demonstrably exposed, and it can be
reliably demonstrated that qualities really do have detectable causality
which can be objectively shared, in various weak and strong ways that
aren't so hard after all.

Brent Allsop




On Tue, Apr 30, 2013 at 12:29 PM, James Carroll <jlcarroll at gmail.com> wrote:

> On Tue, Apr 30, 2013 at 11:37 AM, Brent Allsop
> <brent.allsop at canonizer.com> wrote:
>
>> But Stathis and James are still providing no evidence that they are
>> getting it at all.
>>
>
> Obviously, I think that it is clearly you who aren't getting it at all.
>
>
>> ...For you guys that still aren’t getting it, let’s make this so
>> elementary it is impossible to miss.  Let’s make an even more simplified
>> theoretical model, and hand hold you through every single step of the
>> transmigration process, including a final resulting simulated system that
>> can behave the same.
>>
>
> Which is funny, since you clearly didn't get it, even in this simplified
> handheld case.
>
>
>
>> All of these millions of voxel neurons are sending their color
>> neurotransmitters to the single large ‘binding’ neuron.  This single
>> large binding neuron is a very complicated system, as it enables all these
>> isolated color voxel elements to be bound together into one unified
>> phenomenal experience.  In other words, it is doing lots more than just
>> sending the signal that this red thing is the one we want.  It is also
>> aware
>>
>
> HOW is it "aware" of anything? It is just one neuron. How does it
> "represent" this awareness internally? Yes, it GETS one transmitter or
> another as input, but how does it INTERNALLY represent all these things
> that you claim that this one neuron is "aware" of?
>
>
>> of the qualitative nature of this knowledge and all of their differences
>> and qualitative diversity, and enables the system to talk about and think
>> about all this phenomenal diversity.
>>
>
>
> How does it experience phenomenal anything, when its internal state is
> ONLY impacted by the CAUSAL properties of glutamate or dopamine?
>
>
>
>> So, the first neuron we want to transmigrate is of course the sample
>> pixel neuron.  Obviously, since the binding neuron is like a high
>> fidelity *glutamate* detector, nothing but real *glutamate* will make it
>> say, “yes that is qualitatively the same as the reference pixel”, because
>> of the fact that it has the causal properties of redness.
>>
>>
>
> With you so far....
>
>
>
>> The dancing quale case is quite simple, because we want to replace a
>> pixel neuron firing with *glutamate*, with one that is firing with *
>> dopamine*.  Or, if you are a functionalist, you will be replacing the
>> “functional isomorph” or “functionally active pattern” that has the causal
>> properties of redness with a “functional isomorph” that has the causal
>> properties of a greenness quality.
>>
>>
>>
>> The transmigration process describes providing a transducer, which when
>> it detects something with a greenness property, sends real *glutamate*
>> to the binding neuron, so the binding neuron can say: yes, that has a
>> redness quality.
>>
>
>
> Yes. Again, with you so far. You now have a neuron with dopamine that
> causes your binding neuron to think it is seeing glutamate, through a
> translation (interpretation) layer that replaces the dopamine with
> glutamate for the binding neuron... excellent.
>
>
> But where you fail to take the leap is when you replace the proposed
> binding neuron itself. Then the middleware translation layer
> can disappear, and you can invert the outputs of the binding neuron itself
> instead. That is where your example falls down. You don't think carefully
> enough about what happens when you replace your theoretical "binding"
> neuron itself with a simulation, or with an inverted system. If you do
> that, then you have a binding neuron that is experiencing dopamine, but
> that causes you to ACT as if the original binding neuron had seen
> glutamate.
>
>
>
>
>>  In the fading quale case, we are going to use a binary “1” to represent
>> *glutamate*, and a “0” to represent *dopamine*.  Functionalists tend to
>> miss a particular fact that they must pay close attention to here.  You
>> must be very clear about the fact that this “1” which is representing
>> something that is a “functional isomorph” by definition does not have the
>> same quality the “functional isomorph” has.  The “1” is only something
>> being interpreted as abstracted information, which in turn can be
>> interpreted as representing the *glutamate*, or the functionally
>> isomorphic pattern or whatever it is that actually has the redness quality.
>> Obviously, the transduction layer in this case must be something which,
>> no matter what it is that is representing the “1” as input, produces
>> real glutamate when it sees this “1”, so the binding neuron will give
>> the signal: “yes that has a redness quality”.
>>
>
>
> Again, correct when you simulate (and appropriately translate) the
> behavior of the sample neuron. You do this part right.
>
>
>
>
>>  OK, so now that the sample neuron has been replaced, and we can switch
>> back and forth between them with no change, we can now move on to the
>> binding neuron.  But keep in mind that this one sample neuron could be
>> expanded to include millions of 3D voxel elements.  All of them are
>> firing with diverse sets of neurotransmitters which can be mapped to every
>> possible color we can experience.  And keep in mind the big job this
>> binding neuron has to do, to bind all this, so it can all be experienced,
>> qualitatively, at the same time.
>>
>>
>>
>> In the dancing quale case, we now have to provide a transducer for the
>> reference neuron, which is still firing with *glutamate*, that converts
>> this to *dopamine*.  So, when the system sees *dopamine* on both the
>> sample and the reference, it is finally going to say: “Yes, these are
>> qualitatively the same”, and it should finally be blatantly obvious to
>> everyone how different this system is when we switch them back and
>> forth, even though some naive person may be tempted to believe that both
>> of the “yes they are the same” signals, before and after the switch, are
>> talking about ‘red’ knowledge.
>>
>
>
> No Brent, it's not obvious at all, and this is where you make your most
> obvious mistake.
>
> Let me see if I can break this down for you. Here are the neurons you have:
>
> Sample
> Reference
> Binding
> Downstream (where downstream refers to the neurons that the binding neuron
> talks to, and tells about its experiences).
>
> The connections between these neurons are as follows:
>
> S:B sample to binding
> R:B reference to binding
> B:D binding to downstream...
>
> Ok, so you started inverting things, and you inverted the sample. You had
> to then translate the S:B link, obviously, so that B still got glutamate
> instead of dopamine. The pattern here is that you must translate between
> every inverted neuron and every non-inverted neuron it talks to.
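>
> (If it helps, here is a tiny sketch of that rule in Python; the names and
> the two-transmitter inversion map are purely illustrative:)
>
>     INVERT = {"glutamate": "dopamine", "dopamine": "glutamate"}
>
>     def deliver(transmitter, sender_inverted, receiver_inverted):
>         # A translation layer is needed exactly when a link crosses
>         # the boundary between an inverted and a non-inverted part.
>         if sender_inverted != receiver_inverted:
>             return INVERT[transmitter]  # translate across the boundary
>         return transmitter              # same side: leave the link alone
>
>     # Inverted sample fires dopamine for red; the natural binding
>     # neuron must still receive glutamate, so S:B is translated:
>     print(deliver("dopamine", True, False))   # -> glutamate
>     # Two inverted parts talking to each other need no translation:
>     print(deliver("dopamine", True, True))    # -> dopamine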
>
> Next, you propose inverting the binding neuron. But what you seem to have
> missed is that when you do that, you have to translate between the
> inverted parts and the non-inverted parts.
>
> Sample (inverted)
> Reference
> Binding (inverted)
> Downstream (as before).
>
> S:B sample to binding (can be left alone: both sides are now inverted)
> R:B reference to binding (must be translated)
> B:D binding to downstream... (must be translated)
>
> NOW, it's not at ALL obvious that the individual actually experiences
> anything different after all: because of the translation between the
> binding neuron and the downstream neurons, the person SAYS that their
> experiences haven't changed at all. But you are proposing that their
> experiences really HAVE changed... thus, you are proposing a theory that
> results in epiphenomenal qualia, whether you know it or not.
>
>
>> The fading quale case is similar.  There is a “1” present on both the
>> sample and now on the reference, thanks to a new transduction layer
>> between the pixel producing real glutamate and the virtual binding
>> neuron, which enables the virtual neuron to send a signal that can be
>> thought of as “these are qualitatively the same”, even though everyone
>> should be clear that this is just a lie, or at best an incorrect
>> interpretation of what the signal really qualitatively means.
>>
>
> Ummm, no... let's walk through it, marking the simulated and translated
> parts explicitly.
>
> Step 1, no simulation:
>
> Sample
> Reference
> Binding
> Downstream
>
> S:B sample to binding
> R:B reference to binding
> B:D binding to downstream...
>
> Step 2, simulate sample neuron:
>
> Sample (simulated)
> Reference
> Binding
> Downstream
>
> S:B (translated)
> R:B
> B:D
>
> The translation at this point is simple: when S sends a 1, the
> translation layer sends glutamate to B; when S sends a 0, it sends
> dopamine to B. So far so good, right? B behaves JUST as it did before,
> because it is unaware of the simulation happening upstream, so it sends
> all the same signals downstream... with me so far?
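>
> A minimal sketch of just this step (the names are illustrative only):
>
>     def simulated_sample(sees_red):
>         # The simulated sample neuron emits abstract bits, not
>         # neurotransmitters: 1 stands in for glutamate, 0 for dopamine.
>         return 1 if sees_red else 0
>
>     def s_to_b_translation(bit):
>         # The translation layer turns bits back into real transmitters,
>         # so B remains unaware that S is now simulated.
>         return "glutamate" if bit == 1 else "dopamine"
>
>     print(s_to_b_translation(simulated_sample(True)))   # -> glutamate
>     print(s_to_b_translation(simulated_sample(False)))  # -> dopamine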
>
> Ok, so now, let's simulate S and B, ok?
>
> Step 3, simulate Sample and Binding Neurons.
>
> Sample (simulated)
> Reference
> Binding (simulated)
> Downstream
>
> S:B (untranslated, but simulated)
> R:B (translated)
> B:D (translated)
>
> Now, notice that the S:B link is no longer translated; it is just
> simulated such that the simulation of B does the right thing depending on
> what S was. But the R:B link must be translated. This translation goes much
> like the S:B link did when we simulated S. But now the natural neuron is on
> the other side of the translation, so it simply goes the other direction.
> When the R neuron sends glutamate to B, a detector detects the glutamate,
> and sends a 1 to the simulated B, which then behaves (in simulation) just
> as it would if it had seen real glutamate. When the R neuron tries to
> send dopamine to B, a detector picks up the dopamine, and sends a 0 to
> the simulated B, which then behaves (in simulation) exactly like the
> natural B would have if it had seen real dopamine coming from R. All that
> is left is to describe the B:D simulation layer, which is hard to do since
> you didn't describe how B talks downstream, but however it does it, you
> simulate what it does, and then translate, so all the downstream neurons
> see the same real neurotransmitters that they saw before.
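>
> Sketching step 3 the same way (again, the names are illustrative, and the
> B:D layer here is guesswork, since we never specified how B really talks
> downstream):
>
>     def r_to_b_detector(transmitter):
>         # The natural neuron is now on the upstream side of the
>         # boundary: a detector converts real transmitters into bits
>         # for the simulated B.
>         return 1 if transmitter == "glutamate" else 0
>
>     def simulated_binding(sample_bit, reference_bit):
>         # Simulated B compares bits exactly as the natural B compared
>         # transmitters: "yes, qualitatively the same" becomes a 1.
>         return 1 if sample_bit == reference_bit else 0
>
>     def b_to_d_translation(bit):
>         # B:D layer: downstream neurons receive the same real
>         # transmitter they always did. (Purely illustrative.)
>         return "glutamate" if bit == 1 else "dopamine"
>
>     same = simulated_binding(1, r_to_b_detector("glutamate"))
>     print(b_to_d_translation(same))  # downstream sees glutamate, as before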
>
> Now, if you simulated R too, you end up with a system with no glutamate or
> dopamine in this part of the system, but that CLAIMS to still be
> experiencing qualia, and why? Because the downstream neurons all behave
> exactly as they did before the swap.
>
> Now, if we assume MPD is true, then we have a problem, because this new
> system should have no real qualia, but it CLAIMS that it is experiencing
> real qualia the entire time, as its neurons were slowly replaced with
> simulations. And the result is a theory where the qualia is epiphenomenal.
>
> Thus, MPD is dead.
>
>
>  So, please return and report, and let me know if I can fall to my knees
>> and weep yet?
>>
>
>
> I sincerely hope so. I hope that you have finally got it.
>
> James
>
> --
> Web: http://james.jlcarroll.net
>