[ExI] Wernicke's aphasia and the CRA.

Alfio Puglisi alfio.puglisi at gmail.com
Sat Dec 12 17:44:29 UTC 2009


On Sat, Dec 12, 2009 at 5:29 PM, The Avantguardian <avantguardian2020 at yahoo.com> wrote:

> >From: Alfio Puglisi <alfio.puglisi at gmail.com>
> >To: ExI chat list <extropy-chat at lists.extropy.org>
> >Sent: Sat, December 12, 2009 6:13:44 AM
> >Subject: Re: [ExI] Wernicke's aphasia and the CRA.
> >
> >
> >I interpret the replacement as using a different electronic equivalent for
> >each neuron, so that their specific functions (if any) will be preserved.
>
> Even so, how could you map that function over the domain of inputs and
> range of outputs? How precisely is "close enough"? Does the function even
> remain the same over the life of a neuron? For a simple mathematical example
> of the problem, consider the functions y=x+13 and y=(x^2-169)/(x-13). Over
> all of the infinite possible values of inputs x they are "functionally
> equivalent" and give rise to the same output. . . *except* where x=13. When
> you *know* the functions, the difference is obvious. But if the functions
> were hidden within a "black box" and all you could do was plug in random
> values of x and look at the output, would you notice a difference between
> the two?
>

The replacement would not be a perfect clone; on this I can agree. But
we are not plugging random values into a neuron and observing the
output. In the replacement case, you observe a neuron in its
day-to-day operation, which is not random at all, but very
representative. I don't think you would notice any difference between
the original and the replacement after an observation period of some
days or weeks.
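To make the black-box example concrete, here is a minimal sketch in
Python (the sampling range, sample count, and tolerance are arbitrary
choices of mine): random sampling essentially never hits x=13, so the
two boxes look identical, while the one special input tells them apart
immediately.

    import math
    import random

    def f(x):
        # y = x + 13, defined for every x
        return x + 13

    def g(x):
        # y = (x^2 - 169) / (x - 13), undefined at exactly x = 13
        return (x * x - 169) / (x - 13)

    # Black-box comparison: feed both "boxes" a large number of random
    # inputs and count how often their outputs disagree.
    random.seed(0)
    mismatches = 0
    for _ in range(1_000_000):
        x = random.uniform(-1000.0, 1000.0)
        if not math.isclose(f(x), g(x), rel_tol=1e-9, abs_tol=1e-9):
            mismatches += 1
    print("mismatches found:", mismatches)   # 0 in practice

    # Only the single input x = 13 exposes the hidden difference.
    print("f(13) =", f(13))                  # 26
    try:
        print("g(13) =", g(13))
    except ZeroDivisionError:
        print("g(13) is undefined (division by zero)")

The comparison uses math.isclose rather than exact equality, since
floating-point round-off would otherwise produce spurious mismatches.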



> >Understanding the neurons' inner workings is not needed if you can
> >exactly replace their input/output functions (not an easy feat
> >anyway...). Whether consciousness resides inside single neurons is
> >another matter. In that case, the inner workings will need to be
> >replicated too.
> >
> >Searle's arguments remind me of good old-fashioned dualism: there is
> >something in our brain cells that can't be replicated in a mechanical
> >or electronic equivalent. But without knowing what this "something"
> >is, that's just an article of faith.

> Forget brains or neurons for the moment. Sodium is a metal that
> spontaneously burns when it contacts water. Chlorine is a deadly poisonous
> gas. When you combine the two in a test tube, you get salt. What does the
> electronic or mechanical equivalent of salt taste like?
>

It tastes like silicon or iron :-) But that was the wrong question. A
better question is: what taste does the electronic equivalent
experience? Looking at a human brain, you would never guess the
answer. It is clear that something is causing feeling in the human
brain, but we don't know what it is. One is allowed to hypothesize
that this "something" is specific to the biological brain, in the same
way that a neuron is. That it can't also be replicated on another
substrate is a conjecture that can't be proved until the first
question is resolved. It could turn out that consciousness is a
property of some specific arrangement of matter, say, the way
electrical charge requires electrons, protons or other specific
particles. But I'm not holding my breath.


 Alfio