[ExI] Do digital computers feel?

Stathis Papaioannou stathisp at gmail.com
Mon Feb 20 05:48:54 UTC 2017


Brent Allsop wrote:

>Dang, not quite communicating yet.  You keep saying this over and over
again.  I, also, over and over again in reply, try to describe the many
problems that I see with this.  Thanks to all your help, I'm hopefully
getting better each time.  But you never provide any evidence that you are
trying to understand the problems I'm trying to describe.  All you seem to
do is repeat, over and over, your overly simplistic system: that A: the
brain is a system made of parts, that B: each part interacts with
neighboring parts, and finally C: if you replace one part with a different
part that interacts with its neighbors in the same way, then the system as
a whole will behave in the same way.

Can you state whether you AGREE or DISAGREE that replacing a part with
another part that interacts with its neighbours in the same way as the
original will result in the whole system behaving the same? If you
DISAGREE, can you explain how this could happen, as to me it seems
logically impossible.

>In addition to all the "hard" (as in impossible) problems that result with
your insufficient swapping steps, there is this: I know I (there I didn't
say "we", are you happy John?) can be conscious of 1: redness and 2:
greenness at the same time, as a composite experience.  And 3: using this
composite awareness of each of these qualitatively different
functionalities, I can express that they are different.  With the system
that you describe, and the simplistic way you do the neural substitution on
"parts" with minimal interactions with their neighbors, it isn't possible
to do the 3 above described functionalities without completely ignoring
them.  You must do a substitution on some kind of system that has a
reasonable chance of modeling the 3 mentioned functionalities adequately to
be able to make any kind of claim that you know what is going on,
phenomenally, with the neural substitution.  Plain and simple, your system
is completely qualia blind, like all the experimental neuroscience being
done today that I know of.

The "simplistic" substitution will reproduce all the behaviour of the
brain, including the behaviour associated with the composite experience and
comparison of red and green. The behaviour associated with the composite
experience of red and green includes, for example, the subject being able
to pick the red strawberries among the green leaves and saying, when asked,
"the strawberries are red, the leaves are green, and everything looks
exactly the same as it did before you told me the substitution in my brain
was made". The reason this behaviour will stay the same is that the
subject's muscles will contract in the same sequence because they receive
the same sequence of neural stimuli. Given the "simplistic" substitution,
it is logically necessary that this is what will occur.
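The logic of that claim can be shown with a toy sketch (hypothetical code, not a model of any actual neuroscience): a component is swapped for one built in a completely different way, but with the same input-output behaviour toward its neighbours, and the behaviour of the whole system is then necessarily unchanged.

```python
# Toy illustration of the substitution argument. The "parts" and the
# "system" here are hypothetical stand-ins, chosen only to show that
# identical part-level behaviour forces identical system-level behaviour.

def original_part(x):
    """Original component: doubles its input signal."""
    return 2 * x

# A replacement built differently (a lookup table instead of arithmetic),
# but interacting identically with its neighbours over every input it
# will ever receive.
replacement_part = {x: 2 * x for x in range(10)}.get

def system(part, signal):
    """The 'whole system': feeds a signal through the part, then reacts."""
    intermediate = part(signal)
    return "contract muscle" if intermediate > 6 else "do nothing"

# The system's observable behaviour is the same with either part.
for s in range(10):
    assert system(original_part, s) == system(replacement_part, s)
```

The point of the sketch is that `system` never inspects how the part is implemented; it only receives the part's outputs, so any part with the same input-output mapping leaves the downstream "muscle contractions" identical.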

>If you do a neuro substitution on any system which does have sufficient
detail to at least model these 3 necessary functions (my simplified
glutamate theory for example), there will be no "hard problems", and
everything we subjectively know about how we can be aware of diverse
composite qualitative experiences, will be sufficiently modeled.  We will
be able to understand why the simplistic neural substitution of your system
is qualia blind and leads some to think there are "hard problems".  We will
be able to say we understand how these composite subjective experiences
work and why, both subjectively and objectively, as the neuro substitution
progresses.

To do the replacement you don't have to model anything or understand
anything about the higher level function of the brain. All you have to do
is observe and model the individual parts that you are replacing. You can
completely ignore every function of the brain and be confident that it will
be reproduced, just as you can be confident that every function of your
computer will be reproduced if its switch-mode power supply is replaced with
a battery that can supply the same voltage and current; you don't have to
worry that MS Word will run properly but Adobe Photoshop will not.

On 20 February 2017 at 11:08, Ben <bbenzai at yahoo.com> wrote:

> Brent Allsop wrote:
> > I think it is true that "If you know something, there must be something
> that is that knowledge."  Would you agree?
>
> No.
> Not some/thing/.
>
> I don't think knowledge is a 'thing', it's a process. As John K Clark
> would put it, knowledge isn't a noun, it's more like a verb or an
> adjective. This means that there is no such thing as 'a knowledge', but
> there is such a thing as 'knowing'.
>
> More conventionally put, knowledge (and experience) is an
> information-process.
>
> So your statement above could be reworded: "If you know something, there
> must be an information process that is that knowing".
>
> > For example, you pointed out that you can produce an after image
> experience by staring at cyan for a while and then quickly looking at
> white.  I think it is very telling about what you are ignoring in this
> example, in that you didn't actually say the result was a redness
> experience.
>
> I know what the result is. I wasn't ignoring it, I was leaving it for the
> reader to discover. Again, it's important not to confuse the 'redness
> experience' for a thing. It's a process. In this case, a process that is
> the experience of something that isn't there. Which was the point of using
> that example.
>
>
> > You say it is: " 'conjured up' by our visual system."  But I ask you,
> what is it, that is conjured up?  Is it not knowledge that has a redness
> quality which you can experience as the final result of the processing of
> your visual system?
>
> And again, no 'thing' is conjured up. There's just the conjuring itself.
> That is the process of experiencing a red ball right in front of you. What
> I mean by 'conjuring up' is that a vast amount of information is combined
> in various ways. No 'things' are involved, except as components of the
> substrate that embodies the processing (membranes and ions, mostly).
>
> You might ask "Yes, but what does that consist of?". The only answer we
> can give is that the process is embodied as patterns of neural activation
> that lead to responses such as wanting to kick the red ball, or running
> away, if you happen to be afraid of big red balls, or saying "Oooh, look, a
> big red ball!", etc.
> We don't yet know exactly what the information processing consists of, we
> just know that it's fantastically complex and that our brains do it easily.
> One day we will know, and then we'll be able to build new minds, and
> understand our own.
>
> Because it's a process, the actual embodiment doesn't matter, as long as
> it's capable of doing the required processing. A planet full of beer-cans
> connected with string could do it (slowly), or a large computer, a massive
> ant colony, etc. (have you read "Wang's Carpets" by Greg Egan? That
> contains a good description of this idea). Anything that can process
> information with the required degree of complexity, provided it was
> connected to suitable inputs and outputs, can do it.
>
>
>
> Ben Zaiboc



-- 
Stathis Papaioannou

