[ExI] Is Artificial Life Conscious?

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Fri Apr 29 05:35:36 UTC 2022


On Wed, Apr 27, 2022 at 10:57 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> I agree it doesn't seem like passive/idle information is conscious. Any
> string of information could be interpreted in any of an infinite number of
> ways.
>
>> This shows that something must *happen* in physical
>> reality for consciousness to exist.
>>
>
> I think while something must happen, I am open to viewing it more
> generally: there must be counterfactual relations: "if this, then that,"
> but also: "if not this, then not that." This is something all recordings
> lack but all live instances of information processing possess.
>

### What if you had a record of the detailed states of a large algorithmic
process as it was responding to inputs: for example, a detailed,
synapse-by-synapse model of a human brain verbally describing a visual
input? Let's posit that the digital model was validated as able to
respond to real-human-life inputs with verbal and motor responses
indistinguishable from actual human responses, so we might regard it as a
human mind upload. Let's also posit that the visual input is not real-time;
instead, it is a file stored inside the input/output routines that
accompany the synaptic model.

Is this register-by-register, time-step-by-time-step record of synaptic
and axonal activity conscious when stored in RAM? In a book? Or does
consciousness happen only as you run the synaptic model on the input
file on a digital computer that actually dissipates energy and does
physical things as it creates the mathematical representations of synapses?
And what if you run the same synaptic model on two computers? Is the
consciousness doubled? Is there something special about the dissipation of
energy, or about causal processes, that adds something to the digital,
mathematical entities represented by such processes?
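
To make the contrast concrete, here is a toy sketch in Python (everything
in it is invented for illustration; it is not drawn from any actual
synaptic model): a stored trace of a run comes back unchanged no matter
what input we pretend to feed it, whereas a live run of the same rule
diverges under a different input, which is the counterfactual structure
you point to above.

# Toy sketch only: the update rule and numbers are made up, not a brain model.
def live_step(state, stimulus):
    # The next state depends on the input, so the trajectory has
    # counterfactual structure: change the stimulus and the state changes.
    return (3 * state + stimulus) % 101

def run_live(initial_state, stimuli):
    # A live run: recompute the states from the inputs as they arrive.
    trace, state = [], initial_state
    for s in stimuli:
        state = live_step(state, s)
        trace.append(state)
    return trace

def replay_recording(trace, stimuli):
    # A recording just hands back the stored states, whatever stimuli
    # we pretend to feed it: no "if not this, then not that".
    return list(trace)

stimuli_a = [1, 2, 3, 4]
stimuli_b = [1, 2, 9, 4]                        # a counterfactual input

recording = run_live(7, stimuli_a)              # record one live run

print(run_live(7, stimuli_a))                   # [22, 68, 5, 19], same as the recording
print(run_live(7, stimuli_b))                   # [22, 68, 11, 37], diverges at step 3
print(replay_recording(recording, stimuli_b))   # [22, 68, 5, 19], unchanged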

I struggle to understand what is happening. I have a feeling that two
instances of a simple, pure mathematical entity (a triangle or an
equation) under consideration by two mathematicians are one and the same,
but that two pure mathematical entities that purport to reflect a mind
(like the synapse-level model of a brain), run on two computers, are
separate and presumably independently conscious. Something doesn't fit
here. Maybe there is something special about the physical world that imbues
the models of mathematical entities it contains with a level of existence
different from the Platonic ideal. Or maybe different areas of the Platonic
world are imbued with different properties, such as consciousness, even as
they copy other parts of the Platonic world.

Maybe what matters is that the physical representations of mathematical
entities are subject to the limits of physics, even for the simplest
mathematical objects we can imagine. Two mathematicians thinking about
squares, or two computers running the same program, can in principle
diverge at any point because of physical imperfections imposed on them by
the uncertainty inherent in any quantum process, so they are different even
when they repeat identical mathematical steps. In this way quantum
uncertainty would play into consciousness, although in a very trivial,
tautological fashion.

What do you think about it?

Rafal