[ExI] Do digital computers feel?

Colin Hales col.hales at gmail.com
Mon Feb 6 12:03:53 UTC 2017


$0.02

"But I'm only asking at this point about *observable behaviour*, ignoring
qualia completely. It is my contention that if we do this the qualia will
emerge automatically and it is your contention that they won't. But in
order to figure out who is right you have to consider the experiment as I
have proposed it; you can't assume your conclusion in the premises."

There are two different tests of a hypothesis such as Stathis's position:
H1 = "There is no brain physics that is essential for qualia."

====================
TEST 1)
Assume it's true and pay no regard to any brain physics. Compute models of
the brain. Compare/contrast the behaviour of a test-system artificial brain
with a natural brain. Draw conclusions about H1.

TEST 2)
Assume it's false. Select particular physics that might be held accountable
for qualia. Replicate the targeted physics. Compare/contrast the behaviour of:

a test-system artificial brain with a natural brain
a test-system artificial brain with the TEST 1 equivalent.

Draw conclusions about H1.
====================
For seventy-five years we have thrown out the physics and done 100% TEST
1). To throw that physics out, all you have to do is use a computer. Nobody
in the entire history of science has ever made this stupid oversight
before, and it's a mistake that could only be made ONCE: when computers
were invented.

I can think of a perfect candidate for TEST 2). It's right there in front
of everyone. The proof? TEST 1) _AND_ TEST 2) combined in a proper science
activity. But why should my favorite be right? Got some other physics you
think does it? So what! TEST 1) _AND_ TEST 2). Same story. My particular
choice of essential physics is irrelevant.

It doesn't matter what anyone thinks about qualia origins: magical
emergentism, denialism, or quantum squiggly-doodahs. If you confine
yourself to TEST 1 forever, you are screwed. Until we start getting the
testing right and doing fully formed, actual empirical science, the science
is malformed and this whole argument is likewise screwed.

This science is embarrassingly and egregiously broken.

Colin



On Mon, Feb 6, 2017 at 6:12 PM, Stathis Papaioannou <stathisp at gmail.com>
wrote:

> Brent Allsop wrote:
>
> <When you talk about "*only observable behaviour*" you are assuming a
> definition of "observe" that is completely qualia blind.  There isn't
> something special about qualia, but there is something qualitative, which
> can't be observed by simple "*only observable behavior*".  You can detect
> and represent qualia with any physical behavior you want, but you can't
> know what an abstracted representation of what you have detected
> qualitatively represents unless you know how to interpret that behavior
> back to the real thing.  In order to include the qualities of conscious
> experience into a definition of observation, you must provide a definition
> of observe that properly includes qualitative interpretation of any
> abstracted representations into whatever it is that has a redness quality
> being observed.>
>
> But I'm only asking at this point about *observable behaviour*, ignoring
> qualia completely. It is my contention that if we do this the qualia will
> emerge automatically and it is your contention that they won't. But in
> order to figure out who is right you have to consider the experiment as I
> have proposed it; you can't assume your conclusion in the premises.
>
> <I imagine a simple-minded engineer working to design a perfect glutamate
> detection system that can't be fooled into giving a false response by any
> other not glutamate substance or system.  It is certainly possible that
> some complex set of functions or physical behavior, like glutamate, is one
> and the same as something we can experience as a complex redness, right? And
> that nothing else will have the same physical function or quality?>
>
> I don't understand this paragraph. Do you accept that it is possible to
> make a reliable glutamate detector, a device that tells us only if the
> substance in question is glutamate or not?
>
> <Once your simple minded engineered glutamate detector is working, it will
> never find anything that is glutamate in the rods and wires engineered to
> simulate glutamate in an abstracted way.  Also, without having the correct
> translation hardware, you will not be able to interpret any abstracted
> representations of glutamate (redness), as if it was glutamate (redness),
> let alone, think it is real glutamate (real redness).>
>
> That's all OK - the glutamate detector just detects glutamate, real
> glutamate, and nothing but glutamate. So if there is glutamate in the
> synaptic cleft, the detector in the postsynaptic neuron will detect it. In
> this example the detector is not replacing glutamate but the glutamate
> detector in the neurons, which is the glutamate receptor protein. To
> replace the glutamate you would have to find another molecule or
> nanostructure that is released when glutamate would be released and that
> stimulates the glutamate receptors in the same way as glutamate does.
>
> <And of course, when you neural-substitute the glutamate and its detector
> out for some simulation of the same, the neural substitution fallacy
> should be obvious.  It will only work when you completely swap out the
> entire detection system with something else that knows how to properly
> interpret that which is not glutamate, as if it was representing it.  Only
> then will it be *observably the same behavior*.  But nobody will claim that
> your simulation has any real glutamate being used for representations of
> knowledge - glutamate being something that physically functions identically
> with glutamate (or redness) without any hardware interpretation mechanism
> being required.>
>
> So are you agreeing that if you replace the glutamate with a substance
> which is released when glutamate would be released and which stimulates the
> glutamate receptors when glutamate would stimulate the receptors, the
> neurons would fire in the same order and for the same duration as the
> unmodified neurons would have? Remember this is just a question about the
> *observable behaviour* of the system. Once you answer this question (yes or
> no) you can then answer the additional question of whether the red qualia
> would be preserved in the modified system.
>
>
> --Stathis Papaioannou
>
>
