[ExI] Do digital computers feel?

Jason Resch jasonresch at gmail.com
Fri Dec 23 05:09:51 UTC 2016


Is what you are describing with the two neurotransmitters for red and green
light simply the opponent-process theory of color vision (
https://en.wikipedia.org/wiki/Opponent_process )? On this view, the visual
system does not transmit RGB values the way computers do; instead, colors
are encoded as opposing pairs: red vs. green, blue vs. yellow, black vs.
white. The more red light that falls on the retina, the faster the
corresponding neuron fires, and the more green light, the slower it fires
(I might be mixing up which one speeds it up and which slows it down). If
so, then we would expect the same pair of neurotransmitters used for
red/green to also appear for yellow/blue. But then you have to explain why
the same neurotransmitter is associated with red in some contexts and with
blue in others.

The opponent process explains why there are bluish greens (cyan), bluish
reds (purple), and yellowish reds (orange), but no yellowish blues and no
reddish greens.
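
To make the difference from an RGB encoding concrete, here is a rough
Python sketch (my own toy weights, nothing physiological) that re-codes
RGB-style cone responses as three signed opponent signals, where a positive
value stands in for a faster-than-baseline firing rate and a negative value
for a slower one:

    # Rough sketch: re-coding RGB cone-style responses as opponent-process
    # channels.  The weights are illustrative, not physiologically exact.

    def to_opponent(r, g, b):
        """Map RGB responses (0..1) to three signed opponent signals.

        A positive value stands in for a faster-than-baseline firing rate,
        a negative value for a slower-than-baseline one."""
        red_green   = r - g              # red excites, green inhibits
        blue_yellow = b - (r + g) / 2.0  # blue vs. yellow (yellow ~ red + green)
        black_white = (r + g + b) / 3.0  # overall luminance
        return red_green, blue_yellow, black_white

    # A reddish orange: strong red-vs-green signal, yellowish (negative)
    # blue/yellow signal.
    print(to_opponent(1.0, 0.5, 0.0))    # -> (0.5, -0.75, 0.5)

    # It also shows why there is no "reddish green": a single signed
    # red/green channel cannot be driven toward red (positive) and green
    # (negative) at the same time.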

I don't think there is anything special about the neurotransmitters here,
aside from their effect of either accelerating or decelerating the rate of
firing. The eye can perceive thousands, if not millions, of colors, and
each might be considered a quale in its own right, but there are not
millions of different chemicals or neurotransmitters.
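
To put a rough number on that: a small repertoire of graded signals is
enough to span millions of colors, with no need for a color-specific
chemical per hue. A toy calculation (the 100 distinguishable firing-rate
levels per channel is an assumed figure, purely for illustration):

    # Toy calculation: how many colors can a few graded channels encode?
    # The "levels per channel" figure is assumed purely for illustration.

    levels_per_channel = 100   # distinguishable firing-rate levels (assumed)
    channels = 3               # red/green, blue/yellow, black/white

    print(levels_per_channel ** channels)   # 1000000 combinations from 3 channels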

Some humans can see four primary colors. This is due to an extra type of
color-sensing cone in the retina, not to a difference in brain chemistry.
Something similar was demonstrated when monkeys that were normally color
blind had their retinas infected with a virus that inserted a gene for a
third type of color-sensing cone. After a few weeks their brains adapted to
the new signals and they could perceive a new primary color. Given that the
change affected only the eye, and the information the eye sent to the brain
via the optic nerve, I don't see how the outcome of this experiment can be
explained in terms of altered brain neurochemistry. It appears, rather,
that the brain began to process the signals differently, and the new
informational states it realized led to new patterns and thereby new
perceptions.
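
Here is a toy sketch of why the extra cone yields genuinely new information
(the cone sensitivities and light spectra below are invented numbers, not
real data): two lights that produce identical responses in a two-cone
retina, and so cannot be told apart by it, produce different responses once
a third cone type samples the spectrum:

    # Sketch: two lights a two-cone retina cannot distinguish become
    # distinguishable with a third cone type.  All numbers are invented.

    def cone_responses(light, cones):
        """Return one graded response per cone type, for a light given as
        power in a few coarse wavelength bins (short -> long)."""
        return tuple(round(sum(p * s for p, s in zip(light, cone)), 3)
                     for cone in cones)

    light_a = (0.3, 0.6, 0.2, 0.4)
    light_b = (0.3, 0.1, 0.5, 0.1)

    short_cone = (1.0, 0.0, 0.0, 0.0)
    long_cone  = (0.0, 0.0, 1.0, 1.0)
    mid_cone   = (0.0, 1.0, 0.5, 0.0)   # the "new" cone added by gene therapy

    dichromat  = [short_cone, long_cone]
    trichromat = [short_cone, long_cone, mid_cone]

    print(cone_responses(light_a, dichromat), cone_responses(light_b, dichromat))
    # -> (0.3, 0.6) (0.3, 0.6): identical, so the dichromat can't tell them apart
    print(cone_responses(light_a, trichromat), cone_responses(light_b, trichromat))
    # -> (0.3, 0.6, 0.7) (0.3, 0.6, 0.35): the third cone breaks the tie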

Jason

On Thu, Dec 22, 2016 at 10:44 PM, Brent Allsop <brent.allsop at gmail.com>
wrote:

>
> Hi Stathis,
>
>
> Hmmm, I'm having trouble understanding what you are saying.  You seem not
> to be understanding what I am trying to say, as in no place did I intend
> to say that any functionally equivalent neurons would behave differently
> when they were receiving the same inputs.  I am only saying that IF the
> entire comparison system were one neuron (it would at least have to have
> input from all the voxel-element-representing neurons at the same time, so
> it could know how they all compared to one another, all at the same time),
> and if you swapped this entire "awareness of it all" neuron - only then
> could you swap all the glutamate-producing representations of the
> strawberry with positive-voltage representations of the strawberry - just
> as the neural substitution argument stipulates is required to get the same
> functionality.  Only then would it behave the same.  If any sub-part of
> the comparison system were substituted on its own, it would not be able to
> function the same.  The way it would fail would be different, depending on
> the type of binding system used.  A real glutamate sensor will only say
> all the surface voxels of the strawberry are glutamate when they are all
> represented with real physical glutamate, and a comparison system will
> only say all the positive voltages (again representing the same
> strawberry) are the same "red" if it knows how to interpret all its
> physically different representations of "red" as if they were red.
>
>
> I think the problem is that whenever you are replacing discrete individual
> small neurons, there is no easy way for the system to be aware of whether
> they are all qualitatively alike, all at the same time.  If you give me
> any example of some mechanical way that a system can know how to compare
> (or, better, be aware of) the quality of all the physical representations
> at the same time (I'm doing this by making the entire comparison system
> one large neuron), it will be obvious how the neural substitution will
> fail to function the same.  If the entire comparison system is one neuron,
> then when it, along with all the glutamate, is replaced by positive
> voltages, there would be no failure and it would behave the same - as
> demanded by the substitution argument.
>
>
> Brent
>
>
> On 12/22/2016 8:25 PM, Stathis Papaioannou wrote:
>
>
>
> On 23 Dec. 2016, at 1:39 pm, Brent Allsop <brent.allsop at gmail.com> wrote:
>
> I tried to explain that it wouldn't be identical behavior until the
> entire substitution is complete.
>
> I think the issue is, as James Charles has also pointed out, that you
> contradict yourself by allowing that the artificial neurone will interact
> with the other neurones normally (which is of course crucial to the
> experiment) but then saying that the other neurones will behave
> differently. How could the other neurones possibly behave differently if
> they are receiving the same inputs they would normally receive?
>