[ExI] Can philosophers produce scientific knowledge?

Stathis Papaioannou stathisp at gmail.com
Sat May 8 17:04:23 UTC 2021


On Sun, 9 May 2021 at 02:35, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Stathis,
>
> On Sat, May 8, 2021 at 7:46 AM Stathis Papaioannou via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> The link takes me to the video, but then the video doesn’t play, perhaps
>> because I am using a mobile device. Anyway, as I explained, the logical
>> argument is independent of any particular physical theory. We could say
>> that the brain works via magic from the god Zeus. If the behavior of the
>> brain could be replicated via different magic from the god Apollo, then the
>> associated consciousness would necessarily also be replicated. It isn’t
>> possible to fix consciousness to a particular substrate, a particular
>> physics or a particular process.
>>
>
> Yes http://slehar.com/wwwRel/HarmonicGestalt.mp4 is just an mp4 file,
> best downloaded, then watched.
>
> The entire substitution argument, and your logic, is most definitely
> dependent on the assumption that the neurons being replaced, one at a time,
> via the method described, are "independent computational elements that
> communicate by electrical signals propagated down axons and collaterals and
> transmitted to other neurons through chemical synapses" -- the "classic
> neuron doctrine."
>

It is presented that way in Chalmers’ paper, but the argument applies to
any brain process. It is an argument from the definition of consciousness,
showing that if consciousness is substrate dependent, then the idea of
consciousness is absurd, because it would be logically possible to change
it radically without the subject or anyone else realising.

> This is the simple definition of what abstract computers of today are:
> "independent computational elements that communicate by electrical signals
> transmitted down wires to other elements through chemical synapses." Any
> such system requires interpretation or transducing systems from any one
> representation to something different representing the downstream link, in
> order to preserve the same abstract meaning, otherwise it wouldn't be
> "substrate independent". THAT is what the neuro substitution is working on,
> and it can't work on anything different than that type of computation.
> There is nothing it is intrinsically like for any such computational
> system, abstracted away from physical reality by design.
>

There is no necessity to preserve any abstract meaning anywhere in the
chain as long as the output is identical for all inputs. The internal
processing can be mangled a million ways, like one operating system
emulating another operating system, and the consciousness will be preserved
provided the emulation is done properly. The only empirical test we could
do is to confirm that the emulation is actually done properly: that the
consciousness is preserved is a deduction, not separately subject to
experimental confirmation.
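
To make the point concrete, here is a trivial toy sketch (my own
illustration, in Python, with made-up functions, not anything from
Chalmers' paper or from Steve's video): two routines with completely
different internals that nevertheless give identical output for every
input. Any observer restricted to the outputs cannot tell which one is
running.

    # Toy illustration: two "systems" with very different internals
    # but identical input/output behaviour.

    def system_a(x: int) -> int:
        # Direct arithmetic: double the input.
        return 2 * x

    def system_b(x: int) -> int:
        # "Mangled" internals: represent the number as a string of tally
        # marks, duplicate the tally, then count it back out.
        sign = -1 if x < 0 else 1
        tally = "|" * abs(x)
        doubled = tally + tally
        return sign * len(doubled)

    # From the outside the two are indistinguishable: every input maps to
    # the same output, though nothing inside system_b resembles system_a.
    assert all(system_a(x) == system_b(x) for x in range(-1000, 1000))

The claim about consciousness is the analogous deduction at the level of
the whole brain's input-output behaviour.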

The prediction is that consciousness computation is something completely
> different. It is "computationally bound elemental intrinsic qualities like
> redness and greenness". There must be something that has an intrinsic
> redness quality (let's call whatever it is a red Lego block) and something
> with a different greenness intrinsic quality (a green Lego block), and you
> must be able to bind these together into some kind of computational
> standing wave, representing information in a substrate quality dependent
> way.  The system must be able to be consciously aware of when one of the
> red Lego blocks changes to a green Lego block, in a way that is dependent
> on those particular qualities, otherwise it isn't functioning correctly.
>

The standing wave must have some ultimate effect on the output of the
system, i.e. on the muscles. If this effect is replicated in some other way,
the consciousness will also be replicated. So you would have to claim that
it is logically impossible to replicate the effect of the standing wave (or
whatever it may be) on the muscles by any other means. Logical impossibility
is a very strong restriction, meaning that not even a miracle could do it.

By definition, it is simply a logical impossibility to do any kind of neuro
> substitution on any such system, and your "logical" argument simply doesn't
> apply, or at best isn't logically possible, by definition.
>
> *"This Paradigm is Wrong!"*
>                                     -- Steven Lehar
>
-- 
Stathis Papaioannou