[ExI] Do digital computers feel?

Dave Sill sparge at gmail.com
Thu Dec 29 21:26:16 UTC 2016


On Thu, Dec 29, 2016 at 4:01 PM, John Clark <johnkclark at gmail.com> wrote:

> On Thu, Dec 29, 2016 at 9:41 AM, Dave Sill <sparge at gmail.com> wrote:
>
>
>> the program doesn't "understand" that:
>>
>
> Forget the program: if it's not behavior, how do you know what your fellow
> human beings do or do not understand?
>

I can never know that, can I? I can only believe one way or the other.


>> How the program deals with unexpected conditions like simultaneous red
>> and green lights depends, again, on what the programmer implemented.
>>
>
> But even the programmer doesn't know what the programmer implemented. The
> programmer took 5 minutes to write a program to find the first even number
> greater than 2 that is not the sum of two primes and then stop, but the
> programmer has no idea what the computer will do when it runs that program;
> even worse, the programmer doesn't even know if he will ever know. The
> computer will decide for itself when, or even if, it will stop.
>

The programmer knows what he implemented. That doesn't mean he can predict
the program's behavior for all possible inputs.
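For what it's worth, John's five-minute program is the classic Goldbach-conjecture halting puzzle: nobody knows whether the search ever terminates. A minimal sketch (hypothetical function names, naive trial-division primality testing, and an optional search bound added so it can actually be run) might look like:

```python
def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_sum_of_two_primes(n):
    """True if the even number n can be written as p + q with p, q both prime."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def first_goldbach_counterexample(limit=None):
    """Search even numbers > 2 for one that is NOT a sum of two primes.

    With limit=None this is John's program: it loops until it finds a
    counterexample -- which, if the Goldbach conjecture is true, is forever.
    The limit parameter is an addition for demonstration only.
    """
    n = 4
    while limit is None or n <= limit:
        if not is_sum_of_two_primes(n):
            return n
        n += 2
    return None  # no counterexample found up to limit
```

The programmer knows exactly what each line does, yet whether `first_goldbach_counterexample()` (with no limit) ever returns is an open question in mathematics, which is John's point about not being able to predict the program's behavior.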


>> that's just silly anthropomorphism.
>>
>
> I am using anthropomorphism right now to conclude that you are probably
> conscious. Am I being silly?
>

No, but I might be.


> What I have done is draw an analogy with the only thing in the universe
> known with absolute certainty to be conscious (me) with another thing that
> behaves in complex ways that have certain similarities with the way I
> behave (you). If something behaves rather like me I conclude it is
> probably conscious rather like me. I could be wrong but it's the best I can
> do.
>

Agreed.

>> The program doesn't "feel" or "want" anything
>>
>
> How do you know this? How do you know your fellow human beings feel or
> want anything?
>
>

I don't know it, I believe it, based on my understanding of computer
science and digital logic.

>> brains are extraordinarily complex and not well understood.
>>
>
> But you think you understand brains well enough to understand they are
> not complex enough to make an AI.
>

Huh? Straw man much? I'm pretty sure we haven't created a human-level
general AI yet, but I'm also pretty sure we will someday.

-Dave