[ExI] Fwd: Chalmers

Brent Allsop brent.allsop at gmail.com
Wed Dec 18 13:59:09 UTC 2019


Hi Stathis,

My prediction is that very soon after experimentalists start observing the
physics in the brain in a non-qualia-blind way, they will discover which of
all their descriptions of physics is a description of redness.  This will
also include the discovery of how computational binding of redness and
greenness is physically achieved.  This will falsify functionalism, as
nobody will ever be able to produce a redness experience in a
substrate-independent way, and it will never be possible to do
computational binding on any such functional redness and greenness, as
required to have composite qualitative conscious experiences.


Your way of thinking is neither falsifiable nor verifiable, resulting
in the impossibly hard problems Chalmers has become famous for claiming
exist.

---------- Forwarded message ---------
From: Brent Allsop <brent.allsop at gmail.com>
Date: Wed, Dec 18, 2019 at 6:44 AM
Subject: Re: [ExI] Chalmers
To: Stathis Papaioannou <stathisp at gmail.com>


This statement is only true (and consciousness becomes impossibly hard to
approach via science, because nothing is verifiable or falsifiable) when you
do a neural substitution on a system that does not include the necessary
“computational binding” functionality.  It is not possible for a neural
system, as described in the substitution argument, to have a composite
qualitative experience that includes composite awareness of both redness
and greenness at the same time.  If you can describe to me how any such
system you are doing a neural substitution on can achieve this
functionality, other than “a miracle happens here,” I will jump camps from
materialism to functionalism.  If you provide this necessary
functionality, all the so-called impossibly hard problems of consciousness
Chalmers has become famous for claiming exist are easily resolved as a
simple color problem.

On Tue, Dec 17, 2019 at 9:12 PM Stathis Papaioannou <stathisp at gmail.com>
wrote:

>
>
> On Wed, 18 Dec 2019 at 09:39, Brent Allsop <brent.allsop at gmail.com> wrote:
>
>> But if the argument contains a mistake of logic or sleight of hand
>> <https://canonizer.com/topic/79-Neural-Substtn-Fallacy/2#statement>,
>> then this argument for functionalism is falsified, making it more likely
>> that functionalism IS wrong?
>>
>
> If functionalism is wrong then it means that your qualia could change
> radically and you wouldn’t notice, which seems absurd.
>
>> --
> Stathis Papaioannou
>