[ExI] Do digital computers feel?

Dave Sill sparge at gmail.com
Thu Dec 29 14:41:20 UTC 2016

On Thu, Dec 29, 2016 at 8:29 AM, Jason Resch <jasonresch at gmail.com> wrote:

> The software of a self driving car can differentiate a red light from a
> green light. It's high level functions know when it is seeing a red light
> it should stop, and when it sees a green light it can proceed.

OK, although "know" is not the right word.

> The high level part of the program understands there is a fundamental
> difference between these two states, and that they are exclusive: it should
> never expect to see a simultaneous red-green state.

No, the program doesn't "understand" that: the programmer(s) implemented
that requirement. How the program deals with unexpected conditions like
simultaneous red and green lights depends, again, on what the programmer
implemented. A safe thing to do would be to recognize the red light regardless
of whether the green or yellow lights are lit and treat it as "do not
proceed."

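That defensive precedence rule can be sketched in a few lines. This is a hypothetical illustration, not code from any real self-driving system; the names (`Action`, `decide_action`) are invented for the example:

```python
from enum import Enum

class Action(Enum):
    STOP = "stop"
    CAUTION = "caution"
    GO = "go"

def decide_action(red_lit: bool, yellow_lit: bool, green_lit: bool) -> Action:
    """Defensive rule: a lit red lamp means stop, even in a contradictory
    red+green state. The precedence is the programmer's choice, encoded
    here explicitly -- not something the program 'understands'."""
    if red_lit:
        return Action.STOP      # red wins over any other lamp
    if yellow_lit:
        return Action.CAUTION
    if green_lit:
        return Action.GO
    return Action.STOP          # no lamp lit at all: fail safe

print(decide_action(True, False, True))   # contradictory red+green -> Action.STOP
```

Note that the contradictory red+green case is handled only because the programmer wrote the `if red_lit` branch first; a different ordering would give different behavior.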
> Millions of bytes of raw pixel data were distilled down to this binary
> sensation, which puts the driving software into states of different
> feelings: "the sensation of needing to stop" and the "the sensation of
> wanting to go".

No, that's just silly anthropomorphism. The program doesn't "feel" or
"want" anything: it processes the inputs and does what it's told. Does a
"hello world" program want to print "hello, world"? No, it just does,
because the processor executes the instructions it was given.

> If we added the ability to speak in english to this high level driving
> software, we could ask it to describe the difference between red and green
> lights, but it wouldn't be able to describe it any differently than in the
> terms of how it makes it feel, since the high level part of the program
> doesn't have access to the low level raw pixel data.

No, you can't just give a program the ability to speak. Pretending you can
and imagining what it would say is silly. If you added speech capabilities to
a program, what it would say depends entirely upon how those capabilities
are coded. Like the driving code, it'd process inputs and do what the
programmer directed it to do.
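To make that concrete: if someone did bolt a speech layer onto the driving code, its "description" of red versus green would just be whatever mapping the programmer wrote down. A minimal hypothetical sketch (all names invented for illustration):

```python
# Hypothetical speech layer: the program's "report" on its states is a
# lookup table authored by the programmer, not introspection.
CANNED_RESPONSES = {
    "red": "I am in the stop state.",
    "green": "I am in the go state.",
}

def describe(light: str) -> str:
    # Anything not in the table gets a programmer-chosen fallback.
    return CANNED_RESPONSES.get(light, "I have no description for that.")
```

Change the table and the program "says" something else; nothing about its answers follows from what the states are like for it.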

> It is thought that the brain is similarly organized, Fodor's Modularity of
> Mind is an example. In this idea, the brain has many specialized modules,
> which take a lot of inputs and produce a simplified output shared with
> other regions of the brain. We experience this, rather than redness as the
> frequent action potentials of neurons connected to red-sensing cones in our
> retina, just as self driving cars perceive only the need to stop or the
> need to go, rather than the RGB values collected by its cameras.
> All this goes to say, you can't explain the experience of red without
> explaining a good part of your brain and how the experience effects all the
> other parts of your brain. Quale aren't simple, they are extraordinary
> complex.

Yes, brains are extraordinarily complex and not well understood.
