[ExI] Can philosophers produce scientific knowledge?

Jason Resch jasonresch at gmail.com
Sat May 8 17:41:56 UTC 2021

To support Stathis's position:

Functionalism requires two things:
1. That the physics used by the brain is computable
2. That nothing in the brain requires an infinite amount of information

For 1: No known law of physics is uncomputable. Some argue that wave
function collapse is uncomputable, but one can simulate all possibilities
(i.e. many worlds) either on a quantum computer or on a classical computer
with exponential slowdown.
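To make the "exponential slowdown" concrete, here is a minimal sketch (my
own illustration, not anyone's published simulator) of classical
state-vector simulation of a quantum register: tracking n qubits requires
storing and updating 2^n amplitudes, so memory and time grow exponentially
in n.

```python
import math

def hadamard(state, target):
    """Apply a Hadamard gate to qubit `target` of a state vector."""
    new = state[:]
    step = 1 << target
    s = 1 / math.sqrt(2)
    for i in range(len(state)):
        if i & step == 0:            # visit each amplitude pair once
            a, b = state[i], state[i | step]
            new[i] = s * (a + b)
            new[i | step] = s * (a - b)
    return new

n = 3                        # number of qubits
state = [0.0] * (1 << n)     # 2**n amplitudes: exponential in n
state[0] = 1.0               # start in |000>
for q in range(n):
    state = hadamard(state, q)
# result: a uniform superposition, every amplitude 1/sqrt(2**n)
```

A quantum computer holds this state natively in n qubits; the classical
simulation above pays the 2^n cost explicitly, which is the slowdown
referred to.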

For 2: The brain (and the rest of the body) is created from the finite
information of the DNA (~700 MB) together with information learned through
the senses, which is also finite (~gigabit/second). Moreover, quantum
mechanics imposes a strict upper bound (the Bekenstein bound) on the
information content of any physical system of finite energy and volume.
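As a rough illustration, the Bekenstein bound I <= 2*pi*R*E / (hbar*c*ln 2)
bits can be evaluated for brain-like figures. The radius and mass below are
my own ballpark assumptions, not established measurements:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on the information content (in bits) of a system
    of the given radius and rest mass, per the Bekenstein bound."""
    energy = mass_kg * C**2                      # rest-mass energy, E = mc^2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Assumed brain-like values: ~10 cm radius, ~1.5 kg mass.
bits = bekenstein_bits(radius_m=0.1, mass_kg=1.5)
print(f"{bits:.2e}")   # on the order of 10**42 bits -- enormous, but finite
```

The exact number depends on the assumed radius and mass, but the point is
only that the bound exists and is finite.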

So the only argument against the logical possibility of functionalism
requires positing some new non-computable physics (like Penrose does), or
suggesting that the brain contains an infinite amount of information.

If physics is computable and the brain's information content is finite,
then in principle an appropriately programmed computer could perfectly
emulate the behavior of the brain.

This appears to be borne out so far: detailed brain simulations built on
existing knowledge of the biochemical properties of neurons have replicated
behaviors and firing patterns across large brain regions. See, for
example, the Human Brain Project's results with mouse brains and whisker
stimulation: https://youtu.be/ldXEuUVkDuw


On Sat, May 8, 2021, 12:05 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, 9 May 2021 at 02:35, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> Hi Stathis,
>> On Sat, May 8, 2021 at 7:46 AM Stathis Papaioannou via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>> The link takes me to the video, but then the video doesn’t play, perhaps
>>> because I am using a mobile device. Anyway, as I explained, the logical
>>> argument is independent of any particular physical theory. We could say
>>> that the brain works via magic from the god Zeus. If the behavior of the
>>> brain could be replicated via different magic from the god Apollo, then the
>>> associated consciousness would necessarily also be replicated. It isn’t
>>> possible to fix consciousness to a particular substrate, a particular
>>> physics or a particular process.
>> Yes http://slehar.com/wwwRel/HarmonicGestalt.mp4 is just an mp4 file,
>> best downloaded, then watched.
>> The entire substitution argument, and your logic, is most definitely
>> dependent on the assumption that the neurons being replaced, one at a time,
>> via the method described, are: "independent computational elements that
>> communicate by electrical signals propagated down axons and collaterals and
>> transmitted to other neurons through chemical synapses."  The "classic
>> neuron doctrine"
> It is presented that way in Chalmers’ paper, but the argument applies to
> any brain process. It is an argument from the definition of consciousness,
> showing that if consciousness is substrate dependent, then the idea of
> consciousness is absurd, because it would be logically possible to change
> it radically without the subject or anyone else realising.
>> This is the simple definition of what abstract computers of today are: "independent
>> computational elements that communicate by electrical signals transmitted
>> down wires to other computational elements."  Any such system
>> requires interpretation or transducing systems from any one representation
>> to something different representing the downstream link, in order to
>> preserve the same abstract meaning; otherwise it wouldn't be "substrate
>> independent".  THAT is what the neuro substitution is working on, and it
>> can't work on anything different from that type of computation.  There is
>> nothing it is intrinsically like for any such computational system,
>> abstracted away from physical reality by design.
> There is no necessity to preserve any abstract meaning anywhere in the
> chain as long as the output is identical for all inputs. The internal
> processing can be mangled a million ways, like one operating system
> emulating another operating system, and the consciousness will be preserved
> provided the emulation is done properly. The only empirical test we could
> do is to confirm that the emulation is actually done properly: that the
> consciousness is preserved is a deduction, not separately subject to
> experimental confirmation.
>> The prediction is that consciousness computation is something completely
>> different.  It is "computationally bound elemental intrinsic qualities like
>> redness and greenness"  There must be something that has an intrinsic
>> redness quality (let's call whatever it is a red Lego block) and something
>> with a different greenness intrinsic quality (a green Lego block) and you
>> must be able to bind these together into some kind of computational
>> standing wave, representing information in a substrate quality dependent
>> way.  The system must be able to be consciously aware of when one of the
>> red Lego blocks changes to a green Lego block, in a way that it is
>> dependent on those particular qualities, otherwise it isn't functioning
>> correctly.
> The standing wave must have some ultimate effect on the output of the
> system, i.e. on the muscles. If this is replicated in some other way, the
> consciousness will also be replicated. So you would have to claim that it
> is logically impossible to replicate the effect of the standing wave
> (or whatever it may be) on the muscles. Logical impossibility is a very
> strong restriction, meaning that not even a miracle could do it.
>> By definition, it is simply a logical impossibility to do any kind of neuro
>> substitution on any such system, and your "logical" argument simply doesn't
>> apply, or at best isn't logically possible, by definition.
>> *"This Paradigm is Wrong!"*
>>                                     -- Steven Lehar
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> --
> Stathis Papaioannou
