[ExI] Do digital computers feel?
brent.allsop at gmail.com
Wed Feb 8 17:24:52 UTC 2017
"Anyway, these are peripheral considerations to the central argument. I
have asked you to state what you think would happen if a substitution were
made with a component that has the same *observable behaviour* as the
neural component you think is essential for particular qualia."
I thought I had answered this many times, so thanks for letting me know
that I'm still not communicating. Let me try to answer this clearly.
Absolutely, yes: according to a qualia-blind definition of
"*observable behaviour*", the behaviour would be the same. That is why I
always talk about two people behaving identically (finding and picking
strawberries) while having inverted red/green qualia. Since the
"*observable behaviour*" is qualia blind, it sees the two people behaving
identically, but it is blind to the different behaviours of their inverted
qualitative awareness.
When you include in the system the behaviour that is the redness
awareness, and the detectably different behaviour that is the greenness
awareness, the external behaviour is the same, but the two people are
finding the strawberry for qualitatively inverted initial causal
behavioural reasons.
Again, what is required is some well-defined or testable way to
qualitatively eff ineffable qualities. What makes something ineffable is
the fact that an abstracted representation, like the word "red", does not
have a redness quality. So without some way to interpret an abstracted
representation and get back to the original quality of the composite
knowledge being observed, so that one knows the intended qualitative
meaning of a word like "red", one remains qualia blind.
So, a system must meet some minimal awareness behavioural requirements,
such as including two qualitatively diverse representations of knowledge
and a way to bind them together into a composite qualitative conscious
awareness. This diverse composite qualitative awareness behaviour needs to
be the behavioural mechanism that enables the system to answer questions
like: "No, my qualitative knowledge of red is more like your qualitative
knowledge of green."
There are many testable theoretical ways one might achieve this kind of
detectably diverse qualitative composite awareness within materialist
theories. I use glutamate only because it is the simplest and most
straightforward to understand. I've tried to find some functional way in
which the behaviour of redness knowledge could have properties
distinguishable from greenness behaviour, but not only can I not do it, it
seems impossible.
You said: "I don't see why you should consider this 'miraculous'". To me,
if it is impossible to come up with any theoretically testable way to do
this kind of detectable effing of the ineffable within a functionalist
theory, then the only conclusion a reasonable person can come to is that it
is some kind of "miracle." For one not to think it is simply magic,
someone must falsify the belief that it can't be done, by providing some
theoretically possible way to observe qualitatively diverse awareness
behaviour in a detectable, effing-of-the-ineffable way.
On Tue, Feb 7, 2017 at 11:13 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> I think the critically important part is the behaviour of the system, not
> a particular substance or physics. Intuitively, this seems more likely
> because consciousness has evolved with information processing, a behaviour
> of the system rather than isolated components of the system. Brains
> evolved with what materials happened to be available, and could have
> evolved with completely different neurotransmitters, for example, or even a
> completely different chemistry. It seems implausible that through luck we
> ended up with the only materials that lead to consciousness.
> And I don't see why you should consider this "miraculous" but have no
> problem with qualia being attached to particular substrates such as
> glutamate, which you said when commenting on the "hard problem" in an
> earlier post that you would simply accept as a brute fact.
> Anyway, these are peripheral considerations to the central argument. I
> have asked you to state what you think would happen if a substitution were
> made with a component that has the same *observable behaviour* as the
> neural component you think is essential for particular qualia. By "what you
> think will happen" I mean both what do you think the behaviour of the
> person with the brain will be like - will it change or stay the same? - and
> what do you think the qualia of the person will be like - will they change
> or stay the same? Surely I have put this question in a clear enough way (if
> not, tell me), and surely with all the thinking you have done on this
> subject you will have an answer, even if you think the question is
> unimportant or misses the point.