[ExI] Do digital computers feel?

Jason Resch jasonresch at gmail.com
Thu Dec 29 13:29:33 UTC 2016


The software of a self-driving car can differentiate a red light from a
green light. Its high-level functions know that when it sees a red light
it should stop, and when it sees a green light it may proceed. The
high-level part of the program understands there is a fundamental
difference between these two states, and that they are exclusive: it
should never expect to see a simultaneous red-green state. Millions of
bytes of raw pixel data are distilled down to this binary sensation,
which puts the driving software into one of two different feeling-like
states: "the sensation of needing to stop" and "the sensation of wanting
to go". If we added the ability to speak English to this high-level
driving software, we could ask it to describe the difference between red
and green lights, but it wouldn't be able to describe the difference in
any terms other than how it makes it feel, since the high-level part of
the program doesn't have access to the low-level raw pixel data.

It is thought that the brain is similarly organized; Fodor's Modularity
of Mind is an example of this idea. On this view, the brain has many
specialized modules, each of which takes a large number of inputs and
produces a simplified output that is shared with other regions of the
brain. Accordingly, we experience redness, rather than the frequent
action potentials of the neurons connected to the red-sensing cones in
our retinas, just as the self-driving car perceives only the need to
stop or the need to go, rather than the RGB values collected by its
cameras.

All this goes to say: you can't explain the experience of red without
explaining a good part of your brain and how the experience affects all
its other parts. Qualia aren't simple; they are extraordinarily complex.

Jason

On Wed, Dec 28, 2016 at 10:27 PM, Brent Allsop <brent.allsop at gmail.com>
wrote:

>
>
> On 12/23/2016 12:37 AM, Stathis Papaioannou wrote:
>
>
> On 23 Dec. 2016, at 3:44 pm, Brent Allsop <brent.allsop at gmail.com> wrote:
>
> Hi Stathis,
>
>
> Hmmm, I'm having trouble understanding what you are saying.  You seem
> not to be understanding what I am trying to say, as nowhere did I
> intend to say that functionally equivalent neurons would behave
> differently when receiving the same inputs.  I am only saying that IF
> the entire comparison system were one neuron (it would at least have
> to have input from all the voxel-element-representing neurons at the
> same time, so it could know how they all compared to one another, all
> at the same time), and if you swapped this entire "awareness of it
> all" neuron, only then could you swap all the glutamate
> representations of the strawberry for positive-voltage representations
> of the strawberry - just as the neural substitution argument
> stipulates is required to get the same functionality.  Only then would
> it behave the same.  If only a sub-part of the comparison system were
> substituted, it would not be able to function the same.  The way it
> would fail would differ, depending on the type of binding system used.
> A real glutamate sensor will only say that all the surface voxels of
> the strawberry are glutamate when the strawberry is represented
> entirely with real physical glutamate, and a comparison system will
> only say that all the positive voltages (again representing the same
> strawberry) are the same "red" if it knows how to interpret all its
> physically different representations of "red" as if they were red.
>
>
> I think the problem is that whenever you are replacing discrete,
> individual small neurons, there is no easy way for the system to be
> aware of whether they are all qualitatively alike, all at the same
> time.  If you give me any example of some mechanical way that a system
> can know how to compare (or better, be aware of) the quality of all
> the physical representations at the same time (I'm doing this by
> making the entire comparison system be one large neuron), it will be
> obvious how the neural substitution will fail to function the same.
> If the entire comparison system is one neuron, then when it, along
> with all the glutamate, is replaced by positive voltages, there would
> be no failure and it would behave the same - as demanded by the
> substitution argument.
>
> I'm having difficulty following what you're saying. I'm simply
> proposing replacing any component of a neuron, or any collection of
> neurons, with a machine that does the same job. There is a type of
> glutamate receptor that changes its shape when glutamate molecules
> bind, creating a channel for sodium and potassium ions to pass through
> the membrane, and triggering an action potential. We could imagine
> nanomachines in the place of these receptors that monitor glutamate
> and open and close ion channels in the same way as the natural
> receptors, but are made from different materials; perhaps from carbon
> nanotubes rather than proteins. The engineering problem would be to
> ensure that these nanomachines perform their task of detecting
> glutamate and opening ion channels just like the naturally occurring
> receptors. Do you think it is in theory possible to do this? Do you
> see that if it is possible, then neurons modified with these receptors
> *must* behave just like the original neurons?
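>
> A minimal sketch of the substitution being proposed, assuming a toy
> receptor interface (the class and method names are illustrative, not
> any real model of receptor biophysics):
>
>     class ProteinReceptor:
>         """Natural receptor: glutamate binding opens the channel."""
>         def on_glutamate(self, bound):
>             return "open" if bound else "closed"
>
>     class NanotubeReceptor:
>         """Different material, identical stimulus-response mapping."""
>         def on_glutamate(self, bound):
>             return "open" if bound else "closed"
>
>     def neuron_fires(receptor, glutamate_present):
>         # The rest of the neuron sees only the channel state, never
>         # the receptor's composition.
>         return receptor.on_glutamate(glutamate_present) == "open"
>
>     # Both receptors drive the neuron identically on every input.
>     for r in (ProteinReceptor(), NanotubeReceptor()):
>         assert neuron_fires(r, True) and not neuron_fires(r, False)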
>
>
> Good example - that helps me to understand more clearly.  Yes, I see
> that if neurons are modified [using carbon nanotubes to open and close
> ion channels in the same way that glutamate does] they *must* behave
> just like the original neurons.  I really appreciate you and James
> sticking with me and pointing out all my admittedly sloppy mistakes.
> I've spent much time rewriting this response, after thinking about all
> this for many years, and I hope I've improved and am not making as
> many sloppy mistakes with this reply.
>
> I still see, and theoretically predict, that there must be some level
> at which it can be said that something "has" the redness quality we
> can experience in a bound-together way with other diverse qualities.
> Of note is that something having a redness quality is different from
> some mechanism that can detect this redness quality by being aware of
> it together with other qualities.  And that is the purpose of the
> binding neuron in my example that you are replacing.  It does not have
> the quality; it only detects it, by being aware of the glutamate
> quality versus other physical qualities.  So the binding neuron,
> itself, does not have the glutamate quality, but only allows such
> qualities to be bound together into a unified awareness of all the
> diverse qualities.  As for the behavior of a regular not-exclusive-or
> (XNOR) gate, how the XNOR functionality is implemented is irrelevant
> and hardware independent - as long as the output is the same.  But for
> this binding neuron, the diverse qualities it can be aware of at the
> same time are critically important to its conscious intelligence.  And
> when you replace this functionality with an abstracted XNOR gate, you
> are obviously performing the same function without being aware of, or
> comparing, any real physical glutamate qualities.
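>
> Concretely, as a toy sketch of the hardware-independence point (both
> implementations here are merely illustrative):
>
>     def xnor_logical(a, b):
>         # Direct definition: true exactly when the inputs match.
>         return a == b
>
>     def xnor_from_nand(a, b):
>         # The same function built only from NAND gates.
>         nand = lambda x, y: not (x and y)
>         t = nand(a, b)
>         xor = nand(nand(a, t), nand(b, t))
>         return nand(xor, xor)  # NAND of a value with itself is NOT
>
>     # Radically different implementations, identical truth table.
>     for a in (False, True):
>         for b in (False, True):
>             assert xnor_logical(a, b) == xnor_from_nand(a, b)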
>
>
> On 12/23/2016 1:59 PM, James Carroll wrote:
>
> On Thu, Dec 22, 2016 at 7:39 PM, Brent Allsop <brent.allsop at gmail.com>
> wrote:
>
>
>
>> But of course, everyone would know this was only functionally the same
>>
>
>
> No, everyone would NOT know that. You are begging the question... since
> the question is whether things that are functionally the same have the same
> qualia. So we would NOT know that it is "only" functionally the same.
>
>
> I think statements like this reveal a key difference in our
> theoretical predictions, and that this difference in our thinking is
> the cause of all of our failures to communicate.  For you, whatever is
> functionally the same is the neural correlate of qualia.  For you, the
> qualia are downstream of, or implemented on top of, the functional
> behavior.  But my prediction is that you have this completely
> backwards.  When we are aware of redness and greenness qualities
> together, this qualitative awareness is what enables us to consciously
> perform the XNOR and other hugely diverse qualitative comparison
> functions.
>
> And, again, as I have pointed out in the various weak, stronger, and
> strongest ways to eff the ineffable, the prediction is that you and
> John Clark will soon be proven wrong, and that we will be able to find
> out the actual qualities of these physical behaviors (and how they are
> bound together), and reliably predict when someone's awareness system
> is aware of glutamate versus glycine physical qualities, and thereby
> reliably predict when someone is comparing a redness quality with a
> greenness quality.  The prediction is that everyone will be forced by
> reliable, demonstrable science to say something like: yes, it is
> glutamate that has the greenness quality.  Everyone will start talking
> about it in this way, saying something "has" a redness quality,
> instead of using terms like "the neural correlate of redness".
>
> Another point I should make is that you are predicting that a
> functional theory of qualia gets around the issue raised by the neural
> substitution argument.  But I predict that it doesn't.
>
> Perhaps it will help to look at it this way.  Let's go with your
> functional predictions and move qualia above the hardware level:
> assume that there is some hardware-independent function that has the
> redness quality we can experience, that there is a different function
> that has the greenness quality we can experience, and, of course, that
> we can bind these two qualitative functions together, so that it can
> be said that some binding system is our conscious awareness of both of
> the functional qualities.  The detection of these functional
> qualities, via being consciously aware of them, can be said to be the
> initial cause of our reporting that "I am experiencing red".  Our
> ability to perform the XNOR operation consciously is based on our
> ability to be aware of the redness-function quality, and to know that
> it is not like the greenness-function quality.  So, when you do the
> neural substitution of this system - when you replace the
> binding/awareness function (whatever enables us to be aware of a
> greenness and a redness function at the same time) and replace the
> redness and greenness functions with something else - you again remove
> the conscious, redness-and-greenness-based XNOR function and replace
> it with something that is once more hardware independent (or rather,
> independent of the functional quality at this level).  The
> functionalist theory of qualia implies that the true redness is
> someplace beyond, or based on, this logical awareness-functioning
> system.  So you must repeat the process, removing the qualitative
> system on which the XNOR functionality is based, ad infinitum.  The
> best you can do is claim that it is functional-redness turtles all the
> way up, and that the only place a redness quality exists (on which our
> conscious XNOR functionality is implemented) is in this infinitely
> regressed functionality.
>
> Brent Allsop
>