[ExI] Do digital computers feel?

John Clark johnkclark at gmail.com
Mon Feb 20 01:50:29 UTC 2017


On Sun, Feb 19, 2017 at 5:41 PM, Brent Allsop <brent.allsop at gmail.com>
wrote:

>   All you seem to do is repeat over and over again with your overly
> simplistic system that A: the brain is a system made of parts, that B: each
> part interacts with neighboring parts, and finally C: if you replace one
> part with a different part that interacts with its neighbors in the same
> way, then the system as a whole will behave in the same way.
>
That admirably summarizes my position, except that I see nothing overly
simplistic about it. All complex objects are made of simpler parts, and the
only important thing about a part is the way it interacts with other parts.

> In addition to all the "hard" (as in impossible) problems that result
> with your insufficient swapping steps
>
I don't understand this objection of yours.


> there is this: I know I (there I didn't say "we", are you happy John?)
>
Yes, because I think it's important not to be an organic bigot. Without
exception, every single objection to a computer being conscious can also
be used to argue that none of your fellow human beings are conscious. If
the argument is good for one it's good for the other, and so both should
be treated equally. I might be paraphrasing just a tad, but I think Martin
Luther King said:

*I have a dream that one day we will live in a nation where intelligent
beings will not be judged by whether their brain is made of silicon or
carbon but by the content of their character. I have a dream!*

Well... that's how I remember what he said anyway.

> can be conscious of 1: redness and 2: greenness at the same time, as a
> composite experience.
>
And a computer can remember 2 different things at the same time.

> And 3: using this composite awareness of each of these qualitatively
> different functionalities express that they are different.
>
If they were not different they wouldn't be 2 things, it would be one
thing.

> And 3: using this composite awareness of each of these qualitatively
> different functionalities express that they are different.
>
Yes, but I don't see your point. If there is somebody around here
claiming that red is the same as green, it certainly isn't me.

> With the system that you describe, and the simplistic way you do the
> neural substitution on "parts"
>
Why the quotation marks? For some reason reductionism isn't very trendy
nowadays, but it's what makes science work.

> with minimal interactions with their neighbors,
>
Nobody is demanding minimal interactions with their neighbors; let the
interactions be gargantuan if you like. But if internal changes to a part
produce no changes in the way that part interacts with other parts, then
they make no change to the overall behavior of the system.
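
To put that in concrete terms, here is a minimal sketch in Python (the
names are hypothetical, just for illustration): two parts that differ
completely on the inside but respond to their neighbors' signals
identically, so the surrounding system cannot tell which one it is
talking to.

# A minimal sketch (hypothetical names): a "part" is characterized only by
# how it responds to signals from its neighbors. Two internally different
# parts with the same input/output behavior leave the system unchanged.

class LookupPart:
    # Answers a neighbor's signal by table lookup.
    def __init__(self):
        self._table = {0: 1, 1: 0}

    def respond(self, signal):
        return self._table[signal]


class ComputedPart:
    # Internally different: computes the same answer arithmetically.
    def respond(self, signal):
        return 1 - signal


def run_system(part, signals):
    # The "rest of the brain": it only ever sees the part's outputs.
    return [part.respond(s) for s in signals]


signals = [0, 1, 1, 0]
assert run_system(LookupPart(), signals) == run_system(ComputedPart(), signals)

Swap one part for the other and nothing downstream changes; that is all
the substitution argument requires.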

> Plain and simple, your system is completely qualia blind, like all the
> experimental neuroscience being done today that I know of.
>
The system as described is certainly not blind; it can distinguish between
red and green as well as you can, maybe better. It's true that I can't
prove that the system as described actually experiences qualia, but then I
can't prove that the system called "Brent Allsop" actually experiences
qualia either, not unless I accept the postulate that Charles Darwin was
correct.

> If you do a neuro substitution on any system which does have sufficient
> detail to at least model these 3 necessary functions
> [...]
>
Then the substituted neuron will affect other neurons differently from the
way the original neuron did, and it's just a bad simulation, and all you've
learned is that a bad simulation will bring no enlightenment to anyone.


John K Clark

