[ExI] Do digital computers feel?

William Flynn Wallace foozler83 at gmail.com
Thu Dec 22 21:57:56 UTC 2016


But you couldn't deduce anything unless you've already taken it as an axiom
that intelligent behavior implies consciousness. All I'm asking is that you
play by the same rules when dealing with computers.  John

Quadrillions, nonillions, it doesn't matter.  All I am trying to say is
that we are talking about the most complex thing known to man and reducing
it to code.  It just boggles my mind.  Even if you could hook up every
neuron, every glial cell for recording purposes, and assuming that the
hookups did not interfere with the functions (which I would very, very
seriously doubt), then maybe you could do it.  Happy now?

No, every cell is just atoms and I agree that computers and people are
alike in that way - no magical something to account for anything including
consciousness.

But by your own logic, you could never tell if a computer program was
conscious and could feel.  By my own logic, all we could do is sample
behavior and induce, followed by deduction and further testing.

Suppose instead of uploading a real brain, it was built from the get-go with
code - the way they are doing it now.  Now suppose that it passes all the
Turing tests and whatever.  Would such an advanced computer be capable of
lying?  Yes?  Then it can hide its feelings or consciousness from us.  It
could be paranoid, like HAL.

BTW - on the connection between intelligence and consciousness: there is no
evidence that an amoeba has any memory, and memory, to me, would make a case
for consciousness.  All beasties above that have memories and can learn.  They
'view' the world with eyes or something and move around in it as if they
had a plan to get food, mate, make nests and everything else creatures do -
i.e. purposeful behaviors (mark the 'as if' - I am making no statements
that are teleological).  To me that means consciousness and intelligence
down as far as the paramecium (if that's the next creature up and I dunno).
Ever hear of the Worm Runner's Digest?  Worth a Google.  I met the man and
taught 101 out of his text.

bill w

On Thu, Dec 22, 2016 at 3:31 PM, Brent Allsop <brent.allsop at gmail.com>
wrote:

>
> Oh great.  Thanks, James, for this reply.  I realized after I sent my
> post that I left a few important things out, and you are clearly pointing
> these omissions out.
>
>
>
> The difference is that computer functional logic is all implemented above
> and abstracted away from the quality of the physical hardware level.  All
> representations have a translation or transduction system that physically
> translates between all the different physical representations, so they can
> all be thought of or function as 1s and 0s.  But we are different.  The
> physical quality of our representations is all important, and included in
> all of the comparison and intelligent processing systems.  We can be aware
> of and reflect on what our representations are like, but with a computer, all
> of that is abstracted away by all the hardware translators.  So, true,
> Chalmers admitted that fading / dancing qualia are a possibility, and
> this is exactly what this theory predicts will happen.  If the comparison
> system can detect a phenomenal quality of positive voltages and zero
> voltages, then there will be dancing qualia, as you make the substitution.
> If there are no qualia at all, it will be fading qualia.  Except that,
> qualitatively, you will be able to tell with the first comparator
> substitution.  The prediction is that you will never be able to construct
> any of the comparators to say glutamate is the same as +5 volts.  So you
> will not be able to “flip the switch” on the first comparator
> substitution and not see a difference between the two.  True, you will be
> able to replace everything, and eventually it will start functioning
> entirely identically.  But as the wave of conversion progresses
> partway along, this theory predicts there will clearly be dancing /
> fading qualia, until everything is replaced and the quality of the
> representations becomes entirely irrelevant - abstracted away from the
> quality of the physical layer - with everyone admitting that there was
> clearly a big difference, due to the dancing / fading qualia, on the way to
> the eventually completely identical behavior.
>
> Brent Allsop
>
>
> On Thu, Dec 22, 2016 at 1:52 PM, James Carroll <jlcarroll at gmail.com>
> wrote:
>
>> On Thu, Dec 22, 2016 at 12:23 PM, Brent Allsop <brent.allsop at gmail.com>
>> wrote:
>>
>>>
>>> With that, you should be able to see the flaw in this neural
>>> substitution logic.
>>>
>>
>>
>> Why?
>>
>> I don't yet see how your discussion of translation leads to a flaw in the
>> logic. You may have a point, but I am failing to grasp it given what you
>> have written above.
>>
>> First, it appears that your comparator neuron fires when the inputs are
>> the same... so that is implementing an XNOR (not-XOR), rather than an XOR
>> (which fires if they are different), but that is trivial. I will just assume
>> you meant not-XOR.
>>
>> So, if I understand you correctly, we have two neuro-transmitters, g and
>> a, with you so far. We then have a comparator neuron C... with you so
>> far... and two input neurons A and B... with you so far... C fires if A and
>> B both dump the same neurotransmitter, and doesn't fire if they dump
>> different neurotransmitters. Is that correct?
>>
>> Now I replace one of those input neurons, A, (and potentially other
>> neurons on the upstream side of A) with a mechanical copy A_m... then I put
>> a translator on the output of neuron A_m at the input to comparator C. The
>> translator dumps chemical g if the output of A_m is a 1, and chemical a if
>> the output of A_m is a 0. This is necessary for A_m to properly talk to C
>> in the same way that A did before. Ok... good so far. Now C fires if it
>> gets chemical g from both inputs A_m (translated) and B, or chemical a from
>> both inputs A_m(translated) and B. Now C's behavior is identical both
>> before and after A was replaced with A_m.
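>>
>> Here is a minimal sketch of that substitution step in Python (all of the
>> names -- comparator_C, neuron_A, neuron_A_m, translator -- are made up for
>> this illustration):
>>
>>   def comparator_C(signal_1, signal_2):
>>       # Fires only when both inputs dump the same neurotransmitter --
>>       # the "not-XOR" behavior assumed above.
>>       return signal_1 == signal_2
>>
>>   def neuron_A(stimulus):
>>       # Original biological neuron: dumps chemical 'g' or chemical 'a'.
>>       return 'g' if stimulus else 'a'
>>
>>   def neuron_A_m(stimulus):
>>       # Mechanical replacement: outputs an abstract 1 or 0 instead.
>>       return 1 if stimulus else 0
>>
>>   def translator(bit):
>>       # Dumps 'g' for a 1 and 'a' for a 0, so A_m can still talk to the
>>       # untouched comparator C in the same way A did.
>>       return 'g' if bit == 1 else 'a'
>>
>>   # C's output is the same for every input, whether it is fed by A
>>   # directly or by A_m through the translator.
>>   for s_A in (True, False):
>>       for s_B in (True, False):
>>           before = comparator_C(neuron_A(s_A), neuron_A(s_B))
>>           after = comparator_C(translator(neuron_A_m(s_A)), neuron_A(s_B))
>>           assert before == after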
>>
>> Now we can continue down the chain... I can now replace C with C_m.. now
>> no translation between A_m and C_m is needed, but a new translation step is
>> needed between B and C_m, as well as between C_m and whatever its output
>> is hooked to... let's call that D. Now I must translate between C_m and D.
>> So... as I expand the number of neurons that are replaced with mechanical
>> versions (_m neurons), there is a translation step needed between each
>> neuron that is mechanical, and each that isn't. You can think of this as an
>> expanding wave of mechanical neurons, with a translation step at the edge
>> of the wave. As this wave moves across the brain, the brain's behavior
>> remains unchanged. But IF consciousness is tied to the substrate, the
>> consciousness of the brain is changing, while its behavior is not changed.
>> This is the concept of fading and "dancing" qualia that Chalmers described
>> in his paper.
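>>
>> A rough sketch of that expanding wave, again with made-up names, just to
>> show that the external behavior is unchanged at every stage of the
>> replacement:
>>
>>   def chem_to_bit(transmitter):
>>       return 1 if transmitter == 'g' else 0
>>
>>   def bit_to_chem(bit):
>>       return 'g' if bit == 1 else 'a'
>>
>>   def run_chain(stimulus, n_replaced, length=5):
>>       # A toy chain of relay neurons.  The first n_replaced neurons are
>>       # mechanical (they pass bits); the rest are biological (they pass
>>       # chemicals).  One translation step sits at the edge of the wave.
>>       signal = 'g' if stimulus else 'a'
>>       for i in range(length):
>>           if i < n_replaced:
>>               if not isinstance(signal, int):
>>                   signal = chem_to_bit(signal)   # translate at the edge
>>           else:
>>               if isinstance(signal, int):
>>                   signal = bit_to_chem(signal)   # translate back at the edge
>>           # each neuron, biological or mechanical, simply relays its input
>>       # read the behavior off at the output, as a chemical
>>       return bit_to_chem(signal) if isinstance(signal, int) else signal
>>
>>   # The outward behavior is identical no matter how far the wave has spread.
>>   for stimulus in (True, False):
>>       assert len({run_chain(stimulus, k) for k in range(6)}) == 1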
>>
>> And if you believe in fading and dancing qualia, then you believe in a
>> form of qualia that is essentially epiphenomenal! But my qualia are NOT
>> epiphenomenal. They impact my behavior... For example, I say "red is
>> beautiful" because my qualia of red affects my decision to say that. If you
>> substitute a few neurons in my brain, and I STILL say "red is beautiful"...
>> then I still have the qualia of red, and it hasn't faded.
>>
>> I fail to see how your discussion of the comparator neuron changes this
>> in any significant manner... it's just an example of exactly what we have
>> been describing all along.
>>
>> James
>>
>> --
>> Web: http://james.jlcarroll.net
>>
>
>
>
>

