<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 29, 2022, 1:36 AM Rafal Smigrodzki via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 27, 2022 at 10:57 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto">I agree it doesn't seem like passive/idle information is conscious. Any string of information could be interpreted in any of an infinite number of ways.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">This shows that something must *happen* in physical<br>
reality for consciousness to exist.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I think while something must happen, I am open to viewing it more generally: there must be counterfactual relations: "if this, then that," but also: "if not this, then not that." This is something all recordings lack but all live instances of information processing possess.</div></div></blockquote><div><br></div><div>### What if you had a record of the detailed states of a large algorithmic process as it was responding to inputs, for example a detailed, synapse-by-synapse model of a human brain verbally describing a visual input. Let's posit that the digital model was validated as being able to respond to real-human-life inputs with verbal and motor responses indistinguishable from actual human responses, so we might see it as a human mind upload. Let's also posit that the visual input is not real-time, instead it is a file that is stored inside the input/output routines that accompany the synaptic model.</div><div><br></div><div>Is this register-by-register and time-step by time-step record of synaptic and axonal activity conscious when stored in RAM? In a book? </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I don't believe a record, even a highly detailed one as you describe, is conscious. For if you alter any bit or bits in that record, say the bits representing visual information sent from the optic nerves, none of those changes are reflected in any of the neuron states downstream from that modification, so in what sense are they conscious of the other information, or of the firing of neighboring neurons, or of the visual data coming in, etc., within the representation?</div><div dir="auto"><br></div><div dir="auto">There is no response to any change, and so I conclude there is no awareness of any of that information. This is why I think counterfactuals are necessary. If you make a relevant change to the inputs, that change must be reflected in the right ways throughout the rest of the system; otherwise you aren't dealing with something that has the right functional relations and organization. If no other bits change, then you're dealing with a bit string that is a record only; it is devoid of all functional relations.</div><div dir="auto"><br></div>
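<div dir="auto">To make the counterfactual point concrete, here is a toy sketch in Python (the names and the trivial "process" are purely my own illustration, not a model of anything real): a live process answers differently when its input changes, while a replay of a recorded run keeps giving the old output no matter what you feed it.</div><div dir="auto"><br></div><div dir="auto"><pre>
# Toy illustration only: a live process vs. a replayed recording of it.

def live_process(stimulus):
    # A live system: the output depends counterfactually on the input.
    # "If this, then that" -- and "if not this, then not that."
    return "I see " + stimulus

# Record one run of the live process.
recording = live_process("a red apple")

def replay(recorded_output, stimulus):
    # A replay ignores the (possibly altered) stimulus entirely.
    return recorded_output

print(live_process("a green pear"))       # changes with the input: "I see a green pear"
print(replay(recording, "a green pear"))  # still "I see a red apple": no counterfactual sensitivity
</pre></div><div dir="auto"><br></div>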
<div dir="auto">There's a thought experiment about this, the filmed graph argument by Bruno Marchal, and also one by Tim Maudlin called the Olympia thought experiment. They reach different conclusions, so both are worth analyzing.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Or does consciousness happen only as you run the synaptic model processing the input file on a digital computer that actually dissipates energy and does physical things as it creates the mathematical representations of synapses? </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I don't think "running" is the right word either, as relativity reveals objective time as an illusion. So we must accept the plausibility of consciousness in timeless four-dimensionalism. It then must be the structure of relations and counterfactuals implied by laws (whether physical, mathematical, or the laws of some other physics in some other universe) that is necessary for consciousness.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>And what if you run the same synaptic model on two computers? Is the consciousness double?</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Nick Bostrom has a paper arguing that running the same mind on two computers does create a duplicate with more "weight", while Arnold Zuboff argues for a position called Unificationism, in which there is only one unique mind even if it is run twice, with no change in its "weight".</div><div dir="auto"><br></div><div dir="auto">If reality is infinite and all possible minds and conscious experiences exist, then if Unificationism is true we should expect to be experiencing a totally random (think snow on a TV) kind of experience now, since there are so many more random than ordered unique conscious experiences. Zuboff uses this to argue that reality is not infinite. But if you believe reality is infinite, it can be used as a basis to reject Unificationism.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> Is there something special about dissipation of energy, </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">This is just a reflection of the fact that in physics, information is conserved. If you overwrite or erase a bit in a computer memory, that bit has to go somewhere. In practice, for our current computers, it is leaked into the environment, and this requires leaking energy into the environment, as implied by the Landauer limit. But if no information is erased or overwritten, which is possible in reversible computers (and is in fact necessary in quantum computers), then you can compute without dissipating any energy at all. So I conclude that dissipating energy is not essential to computation or consciousness.</div><div dir="auto"><br></div>
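<div dir="auto">For a sense of scale, the Landauer limit works out to k*T*ln(2) of energy per erased bit, which at room temperature is on the order of 10^-21 joules. A quick back-of-the-envelope calculation of my own, using the standard constants:</div><div dir="auto"><br></div><div dir="auto"><pre>
import math

# Landauer limit: minimum energy dissipated per bit of information erased.
k_B = 1.380649e-23               # Boltzmann constant, in J/K
T = 300.0                        # roughly room temperature, in kelvin
energy_per_bit = k_B * T * math.log(2)

print(energy_per_bit)            # about 2.9e-21 joules per erased bit
# A reversible computation erases nothing, so it pays no Landauer cost at all.
</pre></div><div dir="auto"><br></div>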
<div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>or about causal processes that add something special to the digital, mathematical entities represented by such processes?</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">The causality (though I would say the relations, since causality itself is poorly understood and poorly defined) is key, I think. If you study a bit of cryptography (see "one-time pad" encryption) you can come to understand why any bit string can have any meaning. It is therefore meaningless without the context of its interpreter.</div><div dir="auto"><br></div><div dir="auto">So to be "informative" we need both information and a system to be informed by, or otherwise interpret, that information. Neither by itself is sufficient.</div><div dir="auto"><br></div>
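<div dir="auto">To illustrate the one-time-pad point: for any ciphertext and any desired "plaintext" of the same length, there is a key that decrypts that very ciphertext into that plaintext, so the bits by themselves pin down no meaning. A small sketch (the messages and keys are invented for the example):</div><div dir="auto"><br></div><div dir="auto"><pre>
import os

def xor_bytes(a, b):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

message1 = b"ATTACK AT DAWN"
key1 = os.urandom(len(message1))          # a random one-time pad
ciphertext = xor_bytes(message1, key1)    # what an eavesdropper sees

# The same ciphertext "decrypts" to a completely different message
# under a different, equally valid-looking key:
message2 = b"RETREAT AT TEN"
key2 = xor_bytes(ciphertext, message2)

print(xor_bytes(ciphertext, key1))        # b'ATTACK AT DAWN'
print(xor_bytes(ciphertext, key2))        # b'RETREAT AT TEN'
</pre></div><div dir="auto"><br></div>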
<div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>I struggle to understand what is happening. I have a feeling that two instances of a simple and pure mathematical entity (a triangle or an equation) under consideration by two mathematicians are one and the same but then two pure mathematical entities that purport to reflect a mind (like the synapse-level model of a brain) being run on two computers are separate and presumably independently conscious. Something doesn't fit here. </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">The problem you are referencing is the distinction between types and tokens.</div><div dir="auto"><br></div><div dir="auto">A type is something like "Moby Dick": there is only one uniquely defined type, which is that story.</div><div dir="auto"><br></div><div dir="auto">A token is any concrete instance of a given type. For example, any particular copy of Moby Dick is a token of the type Moby Dick.</div><div dir="auto"><br></div><div dir="auto">I think you may be asking: should we think of minds as types or tokens? I think a particular mind at a particular point in time (one "observer-moment") can be thought of as a type. But across an infinite universe that mind state or observer-moment may have many (perhaps an infinite number of) different tokens -- different instantiations in terms of different brains or computers with uploaded minds -- representing that type.</div><div dir="auto"><br></div><div dir="auto">So two instances of the same mind being run on two different computers are independently conscious in the sense that turning either one off doesn't destroy the type, even if one token is destroyed, just as the story of Moby Dick isn't destroyed if one book is lost.</div><div dir="auto"><br></div><div dir="auto">The open question to me is: does running two copies increase the likelihood of finding oneself in that mind state? This is the Unificationism/Duplicationism debate.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Maybe there is something special about the physical world that imbues models of mathematical entities contained in the physical world with a different level of existence from the Platonic ideal level.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">We can't rule out (especially given all the other fine-tuning coincidences we observe) that our physics has a special property necessary for consciousness, but I tend not to think so, given all the problems entailed by philosophical zombies and zombie worlds -- where we would have philosophers of mind, books about consciousness, and exact copies of conversations such as this thread, all written by entities in a universe that has no consciousness. This idea just doesn't seem coherent to me.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> Or maybe different areas of the Platonic world are imbued with different properties, such as consciousness, even as they copy other parts of the Platonic world.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">As Bruno Marchal points out in his filmed graph thought experiment, if one accepts mechanism (a.k.a. functionalism, or computationalism), this implies that platonically existing number relations and computations are sufficient for consciousness. Therefore consciousness is in a sense more fundamental than the physical worlds we experience. The physics, in a sense, drops out as the consistent extensions of the infinite indistinguishable computations defining a particular observer's current mind state.</div><div dir="auto"><br></div><div dir="auto">This is explored in detail by Markus P. Mueller in his paper on deriving laws of physics from algorithmic information theory. He is able to predict from these first principles that most observers should find themselves in a universe having simple but probabilistic laws, with time, and with a point in the past beyond which further retrodiction is impossible.</div><div dir="auto"><br></div><div dir="auto">Indeed we find this to be true of our own physics and universe. I cover this subject in some detail in my "Why does anything exist?" article (on AlwaysAsking.com). I am currently working on an article about consciousness. The two questions are quite interrelated.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>Maybe what matters is that the physical representations of mathematical entities are subject to the limits of physics, even for the simplest mathematical objects we can imagine. Since two mathematicians thinking about squares, or two computers running the same program, can in principle diverge at any point because of physical imperfections imposed on them by the uncertainty inherent in any physical, quantum physics process, and so they are different even when they repeat identical mathematical steps. </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Markus Mueller reaches a similar conclusion, saying that computer simulations of observers may become "probabilistic zombies" unless we feed information about our world into the simulation. I've seen others argue that we should feed quantum noise/randomness into the simulations of uploaded minds, in case that somehow affects their measure or the diversity of experience for that mind. This feeds into the Unificationism/Duplicationism debate.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>In this way quantum uncertainty would play into consciousness, although in a very trivial, tautological fashion.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I think quantum mechanics and consciousness are related, but not in the ways normally described. Some, like Penrose, say quantum mechanics explains consciousness.</div><div dir="auto"><br></div><div dir="auto">I think it is the other way around: consciousness explains quantum mechanics. Russell Standish talks about this in his book "Theory of Nothing" and his paper "Why Occam's Razor?". 
In short, it is the infinite set of observer states, which can diverge upon exposure to new information/observations, that produces our quantum mechanical view.</div><div dir="auto"><br></div><div dir="auto">This is not unlike the Many Minds interpretation of QM, but with mechanism+Platonism we have an answer to "where do the infinite pre-existing minds come from?", which was an open question for the Many Minds view.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>What do you think about it?</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I appreciate your thinking and questions on these topics. They're deep, and they lead to some of the most fundamental and relevant questions of our time.</div><div dir="auto"><br></div><div dir="auto">Jason</div></div>