[ExI] Do digital computers feel?

Stathis Papaioannou stathisp at gmail.com
Fri Dec 23 07:37:15 UTC 2016



> On 23 Dec. 2016, at 3:44 pm, Brent Allsop <brent.allsop at gmail.com> wrote:
> 
> 
> Hi Stathis,
> 
> 
> Hmmm, I'm having trouble understanding what you are saying.  You seem not to be understanding what I am trying to say, as nowhere did I intend to say that functionally equivalent neurons would behave differently when receiving the same inputs.  I am only saying that IF the entire comparison system were one neuron (it would at least have to have input from all the voxel-element-representing neurons at the same time, so it could know how they all compared to one another, all at once), and if you swapped this entire "awareness of it all" neuron, only then could you swap all the glutamate representations of the strawberry with positive-voltage representations of the strawberry, just as the neural substitution argument stipulates is required to get the same functionality.  Only then would it behave the same.  If only some sub-part of the comparison system were substituted, it would not be able to function the same.  The way it would fail would differ depending on the type of binding system used.  A real glutamate sensor will only say all the surface voxels of the strawberry are glutamate when they are all represented with real physical glutamate, and a comparison system will only say all the positive voltages (again representing the same strawberry) are the same "red" if it knows how to interpret all its physically different representations of "red" as if they were red.
> 
> 
> I think the problem is that whenever you are replacing discrete individual small neurons, there is no easy way for the system to be aware of whether they are all qualitatively alike, all at the same time.  If you give me any example of a mechanical way a system can know how to compare (or better, be aware of) the quality of all the physical representations at the same time (I'm doing this by making the entire system one large neuron), it will be obvious how the neural substitution will fail to function the same.  If the entire comparison system is one neuron, then when it, along with all the glutamate, is replaced by positive voltages, there would be no failure and it would behave the same, as demanded by the substitution argument.
> 
I'm having difficulty following what you're saying. I'm simply proposing replacing any component of a neuron, or any collection of neurons, with a machine that does the same job. There is a type of glutamate receptor that changes its shape when glutamate molecules bind, creating a channel for sodium and potassium ions to pass through the membrane and triggering an action potential. We could imagine nanomachines in place of these receptors that monitor glutamate and open and close ion channels in the same way as the natural receptors, but are made from different materials, perhaps from carbon nanotubes rather than proteins. The engineering problem would be to ensure that these nanomachines perform their task of detecting glutamate and opening ion channels just like the naturally occurring receptors. Do you think it is in theory possible to do this? Do you see that if it is possible, then neurons modified with these receptors *must* behave just like the original neurons?
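
To put the functional-equivalence point in concrete terms, here is a minimal sketch in Python. All the names are hypothetical and nothing here is meant as real receptor biophysics; it just shows two receptor implementations with the same input/output behaviour, so that a neuron built from either responds identically to every input.

# Minimal sketch of functional substitution: two receptor implementations
# with the same input/output behaviour. All names are hypothetical; this
# illustrates the argument, not real receptor biophysics.

from abc import ABC, abstractmethod

class GlutamateReceptor(ABC):
    """Anything that opens an ion channel when enough glutamate is present."""

    @abstractmethod
    def channel_open(self, glutamate_concentration: float) -> bool:
        ...

class ProteinReceptor(GlutamateReceptor):
    """The naturally occurring, protein-based receptor."""
    THRESHOLD = 0.5

    def channel_open(self, glutamate_concentration: float) -> bool:
        return glutamate_concentration >= self.THRESHOLD

class NanotubeReceptor(GlutamateReceptor):
    """A replacement made from different materials, engineered to match."""
    THRESHOLD = 0.5  # tuned to reproduce the protein receptor's behaviour

    def channel_open(self, glutamate_concentration: float) -> bool:
        return glutamate_concentration >= self.THRESHOLD

class Neuron:
    """Fires an action potential when enough of its receptors open channels."""

    def __init__(self, receptors: list, firing_fraction: float = 0.5):
        self.receptors = receptors
        self.firing_fraction = firing_fraction

    def fires(self, glutamate_concentration: float) -> bool:
        open_channels = sum(r.channel_open(glutamate_concentration)
                            for r in self.receptors)
        return open_channels >= self.firing_fraction * len(self.receptors)

# A neuron with natural receptors and one with substituted receptors respond
# identically to every input, because the receptors are functionally equivalent.
natural = Neuron([ProteinReceptor() for _ in range(100)])
modified = Neuron([NanotubeReceptor() for _ in range(100)])
assert all(natural.fires(c) == modified.fires(c) for c in (0.0, 0.3, 0.5, 0.9))

The assert at the end is just the claim restated: if the substituted receptors detect glutamate and open ion channels exactly as the originals do, a neuron built from them must behave exactly like the original neuron.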