[ExI] Is Artificial Life Conscious?
Jason Resch
jasonresch at gmail.com
Fri Apr 29 13:31:00 UTC 2022
On Fri, Apr 29, 2022, 1:36 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Wed, Apr 27, 2022 at 10:57 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> I agree it doesn't seem like passive/idle information is conscious. Any
>> string of information could be interpreted in any of an infinite number of
>> ways.
>>
>> This shows that something must *happen* in physical
>>> reality for consciousness to exist.
>>>
>>
>> I think while something must happen, I am open to viewing it more
>> generally: there must be counterfactual relations: "if this, then that,"
>> but also: "if not this, then not that." This is something all recordings
>> lack but all live instances of information processing possess.
>>
>
> ### What if you had a record of the detailed states of a large algorithmic
> process as it was responding to inputs, for example a detailed,
> synapse-by-synapse model of a human brain verbally describing a visual
> input. Let's posit that the digital model was validated as being able to
> respond to real-human-life inputs with verbal and motor responses
> indistinguishable from actual human responses, so we might see it as a
> human mind upload. Let's also posit that the visual input is not real-time,
> instead it is a file that is stored inside the input/output routines that
> accompany the synaptic model.
>
> Is this register-by-register and time-step by time-step record of synaptic
> and axonal activity conscious when stored in RAM? In a book?
>
A record, even a highly detailed one as you describe, I don't believe is
conscious. For if you alter any bit or bits in that record, say the bits
representing visual information sent from the optic nerves, none of those
changes are reflected in any of the neuron states downstream of that
modification. In what sense, then, is the record conscious of other
information, of the firing of neighboring neurons, of the incoming visual
data, and so on, within the representation?
There is no response to any change and so I conclude there is no awareness
of any of that information. This is why I think counterfactuals are
necessary. If you make a relevant change to the inputs, that change must be
reflected in the right ways throughout the rest of the system, otherwise
you aren't dealing with something that has the right functional relations
and organizations. If no other bits change, then you're dealing with a bit
string that is a record only, it is devoid of all functional relations.
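A toy sketch of my own (not from the original post) may make the counterfactual test concrete: a live process changes its output when you flip an input bit, while a stored record of one run does not respond at all. The `live_process` function is a made-up stand-in for any responsive system.

```python
# Toy illustration: a live process has counterfactual structure
# ("if not this, then not that"); a recording of its outputs does not.

def live_process(bits):
    # A trivial "system": the output depends on every input bit (parity).
    parity = 0
    for b in bits:
        parity ^= b
    return parity

inputs = [1, 0, 1, 1]
recording = [live_process(inputs)]  # a stored record of one run

# Counterfactual test: flip one input bit.
flipped = inputs[:]
flipped[0] ^= 1

print(live_process(inputs), live_process(flipped))  # live system: output changes
print(recording[0], recording[0])                   # record: nothing changes
```

The record passively stores the result of one history; only the live function exhibits the "if not this, then not that" structure.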
There are two thought experiments on this: Bruno Marchal's filmed graph
argument and Tim Maudlin's Olympia thought experiment. They reach different
conclusions, so both are worth analyzing.
> Or does consciousness happen only as you run the synaptic model processing
> the input file on a digital computer that actually dissipates energy and
> does physical things as it creates the mathematical representations of
> synapses?
>
I don't think "running" is quite the right word either, since relativity
undermines the notion of an objective, universal present. We must therefore
accept the plausibility of consciousness in a timeless, four-dimensional
block universe. What is necessary for consciousness, then, is the structure
of relations and counterfactuals implied by laws (whether physical,
mathematical, or the physics of some other universe).
> And what if you run the same synaptic model on two computers? Is the
> consciousness double?
>
Nick Bostrom has a paper arguing that it does: running the mind twice
creates a duplicate with more "weight" (Duplicationism). Arnold Zuboff
argues for the opposite position, Unificationism, on which there is only
one unique mind even if it is run twice, with no change in its "weight".
If reality is infinite and all possible minds and conscious experiences
exist, then if Unificationism is true we should expect to be having a
totally random experience right now (think snow on a TV screen), since
there are vastly more random than ordered unique conscious experiences.
Zuboff uses this to argue that reality is not infinite; conversely, if you
believe reality is infinite, it can serve as a basis for rejecting
Unificationism.
> Is there something special about dissipation of energy,
>
This is just a reflection of the fact that in physics, information is
conserved. If you overwrite or erase a bit in a computer memory, that bit
has to go somewhere. In practice, in our current computers, it is leaked
into the environment, and this requires dissipating energy into the
environment, as implied by the Landauer limit. But if no information is
ever erased or overwritten, which is possible in reversible computers (and
is in fact necessary in quantum computers), then in principle you can
compute without dissipating any energy at all. So I conclude that energy
dissipation is unrelated to computation or consciousness.
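For a sense of scale, here is a back-of-the-envelope calculation of the Landauer bound (standard textbook physics, not from the post): erasing one bit at temperature T dissipates at least k_B * T * ln(2).

```python
import math

# Landauer bound: minimum energy to erase one bit of information.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact by SI definition)
T = 300.0            # roughly room temperature, in kelvin

E_min = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T} K: {E_min:.3e} J")
# about 2.87e-21 J per bit -- tiny, but unavoidable for irreversible
# (erasing) operations; reversible logic sidesteps it by never erasing.
```

This is why the dissipation is a property of *erasure*, not of computation per se.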
> or about causal processes that add something special to the digital,
> mathematical entities represented by such processes?
>
Causality (though I would say "relations", since causality itself is poorly
understood and poorly defined) is key, I think. If you study a bit of
cryptography (see "one-time pad" encryption), you can come to understand
why any bit string can carry any meaning. A bit string is therefore
meaningless without the context of its interpreter.
So to be "informative" we need both information and a system to be informed
by or otherwise interpret that information. Neither by itself is sufficient.
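A small one-time-pad demo (my own illustration, with made-up byte values) shows this directly: the very same ciphertext decrypts to entirely different plaintexts under different keys, so the bit string alone fixes no meaning.

```python
# One-time pad: XOR the message with a key of equal length.
# The same ciphertext can "mean" any message of that length,
# depending on which key (interpreter) you bring to it.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

ciphertext = bytes([0x8F, 0x1C, 0xA2, 0x77])  # arbitrary fixed bytes

# Work backwards: find the key that maps this ciphertext to each message.
key1 = xor_bytes(ciphertext, b"cold")
key2 = xor_bytes(ciphertext, b"warm")

print(xor_bytes(ciphertext, key1))  # b'cold'
print(xor_bytes(ciphertext, key2))  # b'warm'
```

One string of bits, two (or any number of) equally valid readings: the information is only "informative" relative to a system that interprets it.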
> I struggle to understand what is happening. I have a feeling that two
> instances of a simple and pure mathematical entity (a triangle or an
> equation) under consideration by two mathematicians are one and the same
> but then two pure mathematical entities that purport to reflect a mind
> (like the synapse-level model of a brain) being run on two computers are
> separate and presumably independently conscious. Something doesn't fit
> here.
>
The problem you are referencing is the distinction between types and tokens.
A type is something like "Moby Dick": there is only one uniquely defined
type, the story itself.
A token is any concrete instance of a given type: any particular printed
copy of Moby Dick is a token of the type Moby Dick.
I think you may be asking: should we think of minds as types or tokens? I
think a particular mind at a particular point in time (one
"observer-moment") can be thought of as a type. But across an infinite
universe that mind state or observer-moment may have many (perhaps
infinitely many) different tokens -- different instantiations in terms of
different brains or computers with uploaded minds -- representing that
type.
So two instances of the same mind being run on two different computers are
independently conscious in the sense that turning either one off doesn't
destroy the type, even if one token is destroyed, just as the story of Moby
Dick isn't destroyed if one book is lost.
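An illustrative analogy of my own (not from the post): the type/token distinction maps neatly onto value equality versus object identity in Python.

```python
# Type vs. token: equal content (one type), distinct instances (two tokens).

moby_type = "Call me Ishmael."  # the type: the abstract content itself

# Two tokens: separately constructed objects carrying the same content.
token_a = "".join(["Call me ", "Ishmael."])
token_b = "".join(["Call me ", "Ishmael."])

same_type = token_a == token_b    # True: equal content, one type
same_token = token_a is token_b   # False: two separate instances

print(same_type, same_token)

# Destroying one token leaves the type intact, like losing one copy of a book.
del token_a
print(moby_type == token_b)  # True
```

Deleting `token_a` destroys one instantiation, but the content survives in every remaining token, just as the story survives the loss of one book.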
The open question to me is: does running two copies increase the likelihood
of finding oneself in that mind state? This is the
Unificationism/Duplicationism debate.
> Maybe there is something special about the physical world that imbues
> models of mathematical entities contained in the physical world with a
> different level of existence from the Platonic ideal level.
>
We can't rule out (especially given all the other fine-tuning coincidences
we observe) that our physics has some special property necessary for
consciousness, but I tend to think not, given all the problems entailed by
philosophical zombies and zombie worlds -- worlds with philosophers of
mind, books about consciousness, and exact copies of conversations such as
this thread, all written by entities in a universe with no consciousness.
That idea just doesn't seem coherent to me.
> Or maybe different areas of the Platonic world are imbued with different
> properties, such as consciousness, even as they copy other parts of the
> Platonic world.
>
As Bruno Marchal points out in his filmed graph thought experiment, if one
accepts mechanism (a.k.a. functionalism, or computationalism), this implies
that platonically existing number relations and computations are sufficient
for consciousness. Consciousness is therefore, in a sense, more fundamental
than the physical worlds we experience. Physics, in a sense, drops out as
the consistent extensions of the infinite indistinguishable computations
defining a particular observer's current mind state.
This is explored in detail by Markus P. Mueller in his paper on deriving
the laws of physics from algorithmic information theory. From these first
principles he predicts that most observers should find themselves in a
universe with simple but probabilistic laws, with time, and with a point in
the past beyond which further retrodiction is impossible.
Indeed we find this to be true of our own physics and universe. I cover
this subject in some detail in my "Why does anything exist?" article (on
AlwaysAsking.com ). I am currently working on an article about
consciousness. The two questions are quite interrelated.
> Maybe what matters is that the physical representations of mathematical
> entities are subject to the limits of physics, even for the simplest
> mathematical objects we can imagine. Since two mathematicians thinking
> about squares, or two computers running the same program, can in principle
> diverge at any point because of physical imperfections imposed on them by
> the uncertainty inherent in any physical, quantum physics process, and so
> they are different even when they repeat identical mathematical steps.
>
Markus Mueller reaches a similar conclusion, saying that computer
simulations of observers may become "probabilistic zombies" unless we feed
information about our world into the simulation. I've seen others argue
that we should perhaps feed quantum noise/randomness into simulations of
uploaded minds, in case that somehow affects their measure or the diversity
of experience for that mind. This feeds into the
Unificationism/Duplicationism debate.
> In this way quantum uncertainty would play into consciousness, although in
> a very trivial, tautological fashion.
>
I think quantum mechanics and consciousness are related but not in the ways
normally described. Some, like Penrose, say quantum mechanics explains
consciousness.
I think it is the other way around: Consciousness explains quantum
mechanics. Russell Standish talks about this in his book "Theory of
Nothing" and his paper, "Why Occam's Razor?". In short, it is the infinite
set of observer states which can diverge upon exposure to new
information/observations that produces our quantum mechanical view.
This is not unlike the Many Minds interpretation of QM, but with
mechanism+Platonism we have an answer to "where do the infinite
pre-existing minds come from?", a question left open by the Many Minds
view.
> What do you think about it?
>
I appreciate your thinking and questions on these topics. They're deep and
lead to some of the most fundamental and relevant questions of our time.
Jason