[ExI] More thoughts on sentient computers

Giovanni Santostasi gsantostasi at gmail.com
Thu Feb 23 01:58:17 UTC 2023


Jason,
The Newcomb paradox is mildly interesting. But its perceived depth is all in
the word games that, AGAIN, philosophers are so good at. I'm so glad I'm a
physicist and not a philosopher (we are better philosophers than the
philosophers, but we stopped calling ourselves that given the bad name
philosophers gave to philosophy). The false depth of this so-called paradox
comes from a sophistry: the special case of the predictor being infallible.
In that case all kinds of paradoxes come up, and "deep" conversations about
free will, time machines and so on ensue.
In all the other cases one can actually write code to determine, given the
predictor's success rate, which choice is best from a statistical point of
view.
Nothing deep there.
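For instance, here is a minimal Python sketch of that computation (the
payoff values are the standard ones from the thought experiment; the
function name is just mine):

    # Expected payoffs in Newcomb's problem with a fallible predictor.
    # Opaque box: $1,000,000 if the predictor predicted one-boxing, else $0.
    # Transparent box: always $1,000.
    def expected_payoffs(p):
        # p = predictor's success rate, 0 <= p <= 1
        one_box = p * 1_000_000
        two_box = 1_000 + (1 - p) * 1_000_000  # $1,000 plus $1M if it erred
        return one_box, two_box

    for p in (0.5, 0.9, 0.999):
        print(p, expected_payoffs(p))

One-boxing wins whenever p > 0.5005; below that, two-boxing wins.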
So the only issue is whether we can have an infallible predictor, and the
answer is no. It is not even necessary to invoke QM for that, because the
propagation of errors from finite information is enough. Even to predict the
stability of the solar system many millions of years from now, we would need
to know the current positions of the planets to an essentially infinite
level of precision, given all the nonlinear interactions in the system. If
one has the discipline to do without these absolute abstractions of perfect
knowledge and perfect understanding (basically creationist ideas based on
concepts like a perfect god), then one realizes that these philosophical
riddles are not deep but bs (the same goes for qualia, philosophical zombies
and so on). No wonder this paradox has attracted William Lane Craig's
attention.
Giovanni






On Wed, Feb 22, 2023 at 5:41 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Wed, Feb 22, 2023, 3:46 AM Giulio Prisco via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, Feb 21, 2023 at 8:44 PM Jason Resch via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> >
>> >
>> >
>> > On Tue, Feb 21, 2023 at 1:24 AM Giulio Prisco via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> >>
>> >> On Mon, Feb 20, 2023 at 4:43 PM Jason Resch via extropy-chat
>> >> <extropy-chat at lists.extropy.org> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Feb 17, 2023 at 2:28 AM Giulio Prisco via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> >> >>
>> >> >> Turing Church newsletter. More thoughts on sentient computers.
>> Perhaps
>> >> >> digital computers can be sentient after all, with their own type of
>> >> >> consciousness and free will.
>> >> >> https://www.turingchurch.com/p/more-thoughts-on-sentient-computers
>> >> >
>> >> >
>> >> > Hi Giulio,
>> >> >
>> >> > Very nice article.
>> >> >
>> >>
>> >> Thanks Jason!
>> >>
>> >> > I would say the Turing Test sits at the limits of empirical
>> testability in the problem of Other Minds. If tests of knowledge,
>> intelligence, probing thoughts, interactions, tests of understanding, etc.
>> cannot detect the presence of a mind, then what else could? I have never
>> seen any test that is more powerful, so if the Turing Test is insufficient,
>> if testing for identical behavior between two identical minds is not
>> enough to verify the presence of consciousness (either in both or in
>> neither), I would think that all tests are insufficient, and there is no
>> third-person objective test of consciousness. (This may be so, but it would
>> not be a fault of Turing's Test, but rather, I think, due to fundamental
>> limits of knowability imposed by the fact that no observer is ever directly
>> acquainted with external reality, as everything could be a dream or
>> illusion.)
>> >> >
>> >> > ChatGPT in current incarnations may be limited, but the algorithm
>> that underlies it is all that is necessary to achieve general intelligence.
>> That is to say, all intelligence comes down to predicting the next element
>> of a sequence. See, for example, the algorithm for universal artificial
>> intelligence, AIXI ( https://en.wikipedia.org/wiki/AIXI ), which uses just
>> such a mechanism. To understand why this kind of predictive capacity leads
>> to universal general intelligence, consider that predicting the next most
>> likely element of an output sequence requires building general models of all
>> kinds of systems. If I provide a GPT with a list of chess moves, and ask
>> what is the next best chess move to follow in this list, then somewhere in
>> its model is something that understands chess playing. If I provide it a
>> program in Python and ask it to rewrite the program in Java, then somewhere
>> in it are models of both the Python and Java programming languages. Trained
>> on enough data, and provided with enough memory, I see no fundamental
>> limits to what a GPT could learn to do or ultimately be capable of.
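>> For concreteness, here is a toy next-element predictor (a first-order
>> Markov model; a minimal sketch of the idea, nothing like a real GPT):
>>
>>     from collections import Counter, defaultdict
>>
>>     def train(sequence):
>>         # Count, for each element, which elements tend to follow it.
>>         model = defaultdict(Counter)
>>         for a, b in zip(sequence, sequence[1:]):
>>             model[a][b] += 1
>>         return model
>>
>>     def predict_next(model, element):
>>         # Return the most frequently observed successor, if any.
>>         followers = model.get(element)
>>         return followers.most_common(1)[0][0] if followers else None
>>
>>     model = train("abcabcab")
>>     print(predict_next(model, "a"))  # -> 'b'
>>
>> The better the predictions must be, the richer the internal models the
>> predictor is forced to build.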
>> >> >
>> >> > Regarding "passive" vs. "active" consciousness. Any presumed
>> passivity of consciousness quickly disappears whenever one turns attention
>> to the fact that they are conscious or talks about their consciousness. The
>> moment one stops to say "I am conscious." or "I am seeing red right now."
>> or "I am in pain.", then their conscious perceptions, their thoughts and
>> feelings, have already taken on a causal and active role. It is no longer
>> possible to explain the behavior of the system without factoring in the
>> causes underlying those statements to be made, causes which may involve the
>> presence of conscious states. Here is a good write up of the difficulties
>> one inevitably encounters if one tries to separate consciousness from the
>> behavior of talking about consciousness:
>> https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
>> >> >
>> >>
>> >> This is a very interesting observation. Is this a case of Gödelian
>> >> infinite regress in a system that reflects upon itself? Does it imply
>> >> that the future of a system, which contains agents that think/act upon
>> >> the system, is necessarily non-computable from the inside? I'm looking
>> >> for strong arguments for this.
>> >
>> >
>> > I do think that Gödelian incompleteness can help explain some of the
>> mysterious aspects of consciousness, such as the incommunicability of
>> qualia. It is related to the limits introduced by self-reference, and
>> recursion, and the limits of communicability and understanding that are
>> always present between two distinct systems. For example, as a
>> “knower/experiencer”, you can only know one thing, which is what it is like
>> to be you in this exact moment. You can never know what it is like to be
>> someone else, without being that someone else. Then if you are that someone
>> else, you can no longer know what it was like to be your former self. There
>> is an inherent limitation in knowing.
>> >
>> >  Here are some quotes and references which expound on this idea:
>> >
>> >
>> > “While it is true that up until this century, science was exclusively
>> concerned with things which can be readily distinguished from their human
>> observers–such as oxygen and carbon, light and heat, stars and planets,
>> accelerations and orbits, and so on–this phase of science was a necessary
>> prelude to the more modern phase, in which life itself has come under
>> investigation. Step by step, inexorably, “Western” science has moved
>> towards investigation of the human mind–which is to say, of the observer.
>> Artificial Intelligence research is the furthest step so far along that
>> route. Before AI came along, there were two major previews of the strange
>> consequences of mixing subject and object in science. One was the
>> revolution of quantum mechanics, with its epistemological problems
>> involving the interference of the observer with the observed. The other was
>> the mixing of subject and object in metamathematics, beginning with Gödel’s
>> Theorem and moving through all the other limitative Theorems we have
>> discussed.”
>> > -- Douglas Hofstadter in "Godel Escher Bach" (1979)
>> >
>> >
>> >
>> > “In a sense, Gödel’s Theorem is a mathematical analogue of the fact
>> that I cannot understand what it is like not to like chocolate, or to be a
>> bat, except by an infinite sequence of ever-more-accurate simulation
>> processes that converge toward, but never reach, emulation. I am trapped
>> inside myself and therefore can’t see how other systems are. Gödel’s
>> Theorem follows from a consequence of the general fact: I am trapped inside
>> myself and therefore can’t see how other systems see me. Thus the
>> objectivity-subjectivity dilemmas that Nagel has sharply posed are somehow
>> related to epistemological problems in both mathematical logic, and as we
>> saw earlier, the foundations of physics.”
>> > -- Douglas Hofstadter and Daniel Dennett in "The Mind’s I" (1981)
>> >
>> >
>> > “Note that in this view there is no “inner eye” that watches all the
>> activity and “feels” the system; instead the system’s state itself
>> represents the feelings. The legendary “little person” who would play that
>> role would have to have yet a smaller “inner eye,” after all, and that
>> would lead to infinite regress of the worst and silliest kind. In this kind
>> of system, contrariwise, the self-awareness comes from the system’s
>> intricately intertwined responses to both external and internal stimuli.
>> This kind of pattern illustrates a general thesis: “Mind is a pattern
>> perceived by mind.” This is perhaps circular, but it is neither vicious nor
>> paradoxical.”
>> > -- Douglas Hofstadter and Daniel Dennett in "The Mind’s I" (1981)
>> >
>> >
>> >
>> > “In the end, we are self-perceiving, self-inventing, locked-in mirages
>> that are little miracles of self-reference.”
>> > — Douglas Hofstadter, I Am a Strange Loop, p. 363
>> >
>> >
>> >
>> > “There was a man who said though,
>> > it seems that I know that I know,
>> > what I would like to see,
>> > is the eye that knows me,
>> > when I know that I know that I know.
>> > -
>> >
>> > This is the human problem, we know that we know.”
>> > -- Alan Watts
>> >
>> > "Divide the brain into two parts. A and B. Connect the A-brain’s inputs
>> and outputs to the real world–so it can sense what happens there. But don’t
>> connect the B-brain to the outer world at all; instead, connect it so that
>> the A-brain is the B-brain’s world!”
>> > -- Marvin Minsky in "Society of Mind" (1986)
>> >
>> >
>> > “So far, we have learned nothing truly new about brains. These results
>> are mere corollaries of known mathematical results; they are applicable to
>> systems much simpler than brains - even television sets contain some
>> feedback loops. Hence we have not yet learned anything new about
>> consciousness. We have only learned how to apply Gödel's theorem to
>> machines in amusing (or repulsive?) new ways. [...]
>> > In this paper I have argued that human brains can have logical
>> properties which are not directly accessible to third-person investigation
>> but nevertheless are accessible (at least in a weak sense) to the brain
>> itself. It is important to remember that these properties are not
>> metaphysically mysterious in any way; they are simply logical properties of
>> neural systems. They are natural properties, arising entirely from the
>> processing of information by various subsystems of the brain. The existence
>> of such properties can pose no threat to the scientific understanding of
>> the mind. [...]
>> > The existence of these logical properties contradicts the widespread
>> feeling that information processing in a machine cannot have features
>> inaccessible to objective observers. But despite this offense against
>> intuition, these findings support a view of first-person access which may
>> be far more congenial to a scientific understanding of the mind than the
>> alternative views that first-person character is either irreducible or
>> unreal. Our conclusion suggests a way to bypass an important obstacle to a
>> reductionistic account of consciousness. Indeed, it suggests that
>> consciousness may be reducible to information processing even if experience
>> does have genuine first-person features.”
>> > -- Mark F. Sharlow in "Can Machines Have First-Person Properties?"
>> (2001)
>> >
>> >
>> >
>> > “Looked at this way, Gödel’s proof suggests–though by no means does it
>> prove!–that there could be some high-level way of viewing the mind/brain,
>> involving concepts which do not appear on lower levels, and that this level
>> might have explanatory power that does not exist–not even in principle–on
>> lower levels. It would mean that some facts could be explained on the high
>> level quite easily, but not on lower levels at all.”
>> > -- Douglas Hofstadter in "Godel Escher Bach" (1979)
>> >
>> >
>> > “To put it very simply, it becomes a question largely of who pushes
>> whom around in the population of causal forces that occupy the cranium.
>> There exists within the human cranium a whole world of diverse causal
>> forces; what is more, there are forces within forces within forces, as in
>> no other cubic half-foot of universe that we know. At the lowermost levels
>> in this system are those local aggregates of subnuclear particles confined
>> within the neutrons and protons of their respective atomic nuclei. These
>> individuals, of course, don't have very much to say about what goes on in
>> the affairs of the brain. Like the atomic nucleus and its associated
>> electrons, the subnuclear and other atomic elements are "moleculebound" for
>> the most part, and get hauled and pushed around by the larger spatial and
>> configurational forces of the whole molecule.
>> > Similarly the molecular elements in the brain are themselves pretty
>> well bound up, moved, and ordered about by the enveloping properties of the
>> cells within which they are located. Along with their internal atomic and
>> subnuclear parts, the brain molecules are obliged to submit to a course of
>> activity in time and space that is determined very largely by the overall
>> dynamic and spatial properties of the whole brain cell as an entity. Even
>> the brain cells, however, with their long fibers and impulse conducting
>> elements, do not have very much to say either about when or in what time
>> pattern, for example, they are going to fire their messages. The firing
>> orders come from a higher command. [...]
>> > Near the apex of this compound command system in the brain we find
>> ideas. In the brain model proposed here, the causal potency of an idea, or
>> an ideal, becomes just as real as that of a molecule, a cell, or a nerve
>> impulse. Ideas cause ideas and help evolve new ideas. They interact with
>> each other and with other mental forces in the same brain, in neighboring
>> brains, and in distant, foreign brains. And they also interact with real
>> consequence upon the external surroundings to produce in toto an explosive
>> advance in evolution on this globe far beyond anything known before,
>> including the emergence of the living cell.”
>> > -- Roger Sperry in "Mind, Brain, and Humanist Values" (1966)
>> >
>> >
>> >
>> > “In order to deal with the full richness of the brain/mind system, we
>> will have to be able to slip between levels comfortably. Moreover, we will
>> have to admit various types of “causality”: ways in which an event at one
>> level of description can “cause” events at other levels to happen.
>> Sometimes event A will be said to “cause” event B simply for the reason
>> that the one is a translation, on another level of description, of the
>> other. Sometimes “cause” will have its usual meaning: physical causality.
>> Both types of causality–and perhaps some more–will have to be admitted in
>> any explanation of mind, for we will have to admit causes that propagate
>> both upwards and downwards in the Tangled Hierarchy of mentality, just as in
>> the Central Dogmap.”
>> > -- Douglas Hofstadter in "Godel Escher Bach" (1979)
>> >
>> >
>> > "If one looks at the catalog of conscious experiences that I presented
>> earlier, the experiences in question are never described in terms of their
>> intrinsic qualities. Rather, I used expressions such as “the smell of
>> freshly baked bread,” “the patterns one gets when closing one’s eyes,” and
>> so on. Even with a term like “green sensation,” reference is effectively
>> pinned down in extrinsic terms. When we learn the term “green sensation,”
>> it is effectively by ostension–we learn to apply it to the sort of
>> experience caused by grass, trees, and so on. Generally, insofar as we have
>> communicable phenomenal categories at all, they are defined with respect
>> either to their typical external associations or to an associated kind of
>> psychological state.”
>> > -- David Chalmers in "The Conscious Mind" (1996)
>> >
>> >
>> > “Because what you are, in your inmost being, escapes your examination
>> in rather the same way that you can’t look directly into your own eyes
>> without using a mirror, you can’t bite your own teeth, you can’t taste your
>> own tongue, and you can’t touch the tip of this finger with the tip of this
>> finger. And that’s why there’s always an element of profound mystery in the
>> problem of who we are.”
>> > -- Alan Watts in “THE TAO OF PHILOSOPHY" (1965)
>> >
>> >
>> > “You could not see the seer of seeing. You could not hear the hearer of
>> hearing. You could not think the thinker of thinking. You could not
>> understand the understander of understanding.”
>> > -- Brihadaranyaka Upanishad (900 - 600 B.C.)
>> >
>> >
>> >
>> > "Consciousness cannot be accounted for in physical terms.  For
>> consciousness is absolutely fundamental. It cannot be accounted for in
>> terms of anything else.”
>> > Erwin Schrödinger in interview (1931)
>> >
>> >
>> >
>> > “If understanding a thing is arriving at a familiarizing metaphor for
>> it, then we can see that there always will be a difficulty in understanding
>> consciousness. For it should be immediately apparent that there is not and
>> cannot be anything in our immediate experience that is like immediate
>> experience itself. There is therefore a sense in which we shall never be
>> able to understand consciousness in the same way that we can understand
>> things that we are conscious of.”
>> > -- Julian Jaynes in "The Origin of Consciousness in the Breakdown of
>> the Bicameral Mind" (1976)
>> >
>>
>> Thanks Jason for this great list of quotes. I was familiar with most
>> but not all. I especially like Alan Watts' "you can’t look directly
>> into your own eyes without using a mirror, you can’t bite your own
>> teeth, you can’t taste your own tongue, and you can’t touch the tip of
>> this finger with the tip of this finger." These quotes are poetic,
>> inspiring, and sound deeply true. However, I still miss a rigorous
>> formulation of the concept somewhat analogous to the proofs of Gödel,
>> Turing, Chaitin etc. I'm writing a little something about this.
>>
>
> I have been thinking about this, and I think there are a few examples you
> could base such a proof on, though I don't know if anyone has written about
> these before or tried to write such a proof.
>
> The first such example is related to a variation of Newcomb's paradox.
> https://en.m.wikipedia.org/wiki/Newcomb%27s_paradox
> This variation asks: what about the case where the boxes are transparent?
>
> If you familiarize yourself with all the nuances of Newcomb's paradox in
> relation to free will, the use of transparent boxes seems to create a
> paradox in that it bases a course of action on a predicted behavior that
> was itself dependent on the course of action already chosen.
>
> Another example: two research scientists in two different universes each
> have access to powerful computers capable of simulating whole universes.
> Let's call these two universes A and B. By chance, scientist A (Alice)
> happens to discover universe B in her simulations, and scientist B (Bob)
> happens to discover universe A in his simulations. They also both discover
> each other. That is, Alice notices Bob inside her simulation, while Bob
> discovers Alice in his simulation. Both scientists drop what they are doing
> and fetch a pad of paper and write, "Hey there, I noticed that you are
> simulating me, salutations! My name is ..." (And they write their names).
> Both go back to run their simulation forward a few seconds and see the
> other has written them a greeting! They both hurriedly go back to the pad
> and Alice writes "Since my name is alphabetically first, I will write a
> first message and then you can write back to me once you've seen it. I will
> wait 60 seconds then check to see what you have written." While
> coincidentally at the same time Bob writes "Since your name is
> alphabetically first, why don't you say something first and I will respond
> to it." Bob goes back to his computer and smiles when he sees Alice had the
> same idea. He returns to the pad and writes "Pleased to meet you Alice!" In
> this way they communicate back and forth and carry on a deep and meaningful
> inter-universe communication.
>
> But can such a conversation take place? Or does A simulating B simulating
> A create a hall-of-mirrors infinite recursion that is insoluble? Is it
> impossible in the same way that the behavior of the program in the halting
> problem could not be predicted when it was given a deviant version of
> itself?
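> (A toy rendering of the regress, with made-up function names: if each
> universe's next state requires a full simulation of the other, a naive
> evaluation never bottoms out.)
>
>     def simulate_A():
>         # A's next state depends on what B's simulation shows...
>         return simulate_B()
>
>     def simulate_B():
>         # ...and B's next state depends on what A's simulation shows.
>         return simulate_A()
>
>     try:
>         simulate_A()
>     except RecursionError:
>         print("the mutual simulation never bottoms out")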
>
> I think you could potentially build a more rigorous proof based on these
> ideas, but I haven't proven that! ☺️
>
> Jason
>
>
>
>
>
>
>
>> >
>> >
>> >
>> >
>> >
>> >>
>> >>
>> >> > Regarding the relationship between quantum mechanics and
>> consciousness, I do not see any mechanism by which the randomness of
>> quantum mechanics could affect the properties or capabilities of the
>> contained minds. I view quantum mechanics as introducing a fork() to a
>> process ( https://en.wikipedia.org/wiki/Fork_(system_call) ). The entire
>> system (of all processes) can be simulated deterministically, by copying
>> the whole state, mutating a variable through every possible value it may
>> have, then continuing the computation. Seen at this level (much like the
>> level at which many-worlds conceives of QM), QM is fully deterministic.
>> Eliminating the other branches by saying they don't exist (ala Copenhagen),
>> in my view, does not and cannot add anything to the capacities of those
>> minds within any branch. It is equivalent to killing all but one of the
>> forked processes randomly. But how can that affect the properties of the
>> computations performed within any one forked process, which are by
>> definition isolated and unaffected by the goings-on in the other forked
>> processes?
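>> (For concreteness, a toy sketch of this view, with illustrative names:
>> copy the state, run each possible outcome, and note that no branch's
>> computation depends on any other branch's existence.)
>>
>>     import copy
>>
>>     def step(state, outcome):
>>         # Each branch is its own deterministic computation; branches
>>         # never interact after the fork.
>>         state = copy.deepcopy(state)
>>         state["history"].append(outcome)
>>         return state
>>
>>     def fork(state, outcomes=(0, 1)):
>>         # "Measure" a two-valued variable: the whole ensemble of
>>         # branches evolves deterministically.
>>         return [step(state, o) for o in outcomes]
>>
>>     worlds = fork({"history": []})
>>     worlds = [w for parent in worlds for w in fork(parent)]
>>     print(len(worlds))  # 4 branches after two "measurements"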
>> >> >
>> >> > (Note: I do think consciousness and quantum mechanics are related,
>> but it is not that QM explains consciousness, but the reverse,
>> consciousness (our status as observers) explains QM, as I detail here:
>> https://alwaysasking.com/why-does-anything-exist/#Why_Quantum_Mechanics )
>> >> >
>> >> > Further, regarding randomness in our computers, many modern CPUs
>> have instructions called RDSEED and RDRAND, which are based on hardware
>> random number generators, typically thermal noise, which may ultimately be
>> affected by quantum unpredictable effects. Would you say that an AI using
>> such a hardware instruction would be sentient, while one using a
>> pseudorandom number generator (
>> https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
>> ) would not?
>> >> >
>> >>
>> >> I had exactly this example in a previous longer draft of this post!
>> >> (then I just wrote "AIs interact with the rest of the world, and
>> >> therefore participate in the global dance and inherit the lack of
>> >> Laplacian determinism of the rest of the world"). Yes, I don't see
>> >> strong reasons to differentiate between (apparently) random effects in
>> >> the wet brain and silicon. Pseudorandom numbers are not "apparently
>> >> random" enough.
>> >
>> >
>> > Very interesting that we both thought that.
>> >
>> > My professional background is in computer science and cryptography. One
>> property of cryptographically secure pseudorandom number generators
>> (CSPRNGs) is that a CSPRNG with an internal state of N-bits is impossible
>> to differentiate from the output of a true (say quantum) source of
>> randomness without expending on the order of 2^N computations. I think this
>> has ramifications for the Turing Test: even assuming the use of true vs.
>> pseudorandomness makes some difference in observable output/behavior, that
>> difference would not be detectable in theory without massive computational
>> cost. Is this what you are saying, or are you saying that the behavior
>> would not be
>> distinguishable, but the internal view for the machine using a CSPRNG would
>> be different (or absent)?
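>> (For concreteness, a minimal counter-mode sketch of the kind of CSPRNG
>> meant here; the construction is illustrative, not any particular
>> standard.)
>>
>>     import hashlib
>>
>>     def csprng_blocks(seed: bytes, n_blocks: int):
>>         # SHA-256 of seed || counter: fully deterministic, yet believed
>>         # indistinguishable from true randomness without on the order of
>>         # 2^N work (N = bits of secret seed), e.g. guessing the seed.
>>         for counter in range(n_blocks):
>>             yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
>>
>>     for block in csprng_blocks(b"secret-seed", 2):
>>         print(block.hex())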
>> >
>>
>> Pseudorandomness is fully deterministic in Laplace's past -> future
>> sense, but true randomness is NOT fully deterministic in Laplace's
>> past -> future sense (though it can be deterministic in a global
>> sense, which is one of the points I'm making). In other words a
>> sequence (even an infinite sequence) of pseudorandom numbers is
>> entirely specified by initial conditions at a given time in a small
>> part of the universe, but a sequence of true random numbers is either
>> really random or globally but nonlocally deterministic in space and
>> time.
>>
>> What difference does this difference make? I think the behavior of an
>> AI driven by pseudorandom (as opposed to truly random) inputs may well
>> be indistinguishable from that of a sentient agent, AND its (passive)
>> internal view / consciousness may well feel the same, BUT this AI
>> wouldn't be a sentient agent with consciousness and free will (one
>> that participates in the overall dynamics of reality).
>>
>> > I do think there may be something to the notion of "belonging to the
>> same universe". Markus P. Müller speaks of "probabilistic zombies" that
>> result in the case of a computationally generated observer which is fully
>> causally isolated from the physics of the simulator:
>> https://arxiv.org/abs/1712.01826 However, I think the argument could be
>> made that you can "rescue" them by seeding their simulated environment with
>> quantum randomness from our own universe. Coincidentally, this was
>> described in a science fiction piece from 1996:
>> http://frombob.to/you/aconvers.html
>> >
>> >
>> > "The Ship on which I live contains a rather large number of random
>> number generators. Many of the algorithms running on the Ship need "random"
>> inputs, and these generators provide the necessary degree of randomness.
>> Many of the generators are dithered with noise gathered from the physical
>> world, which helps some people to feel better about themselves."
>> >
>> >
>> >
>> >>
>> >>
>> >> > On free will, I like you, take the compatibilist view. I would say,
>> determinism is not only compatible with implementing an agent's will, but
>> it is a requirement if that agent's will is to be implemented with a high
>> degree of fidelity. Non-determinism of any kind functions only to
>> introduce errors and undermine the fidelity of the system, and thereby
>> drift away from a true representation of some agent's will. But then, where
>> does unpredictability come from? I think the answer is simply that many
>> computations, especially sophisticated and complex ones, are chaotic in
>> nature. There is no analytic technique to compute and predict their future
>> states; they must be simulated (or emulated) to work out their future
>> computational states. This is as true for a brain as it is for a computer
>> program simulating a brain. The only way to see what one will do is to play
>> it out (either in vivo or in silico). Thus, the actions of such a process
>> are not only unpredictable to the entity itself, but also any other
>> entities around it, and even a God-like mind. The only way God (or the
>> universe) could know what you would do in such a situation would be to
>> simulate you to such a sufficient level of accuracy that it would in
>> effect, reinstate you and your consciousness. Thus your own mind and
>> conscious states are indispensable to the whole operation. The universe
>> cannot unfold without bringing your consciousness into the picture, and
>> God, or Omega (in Newcomb's paradox) likewise cannot figure out what you
>> will do without also invoking your consciousness. This chaotic
>> unpredictability, I think, is sufficient to explain the unpredictability of
>> conscious agents or complex programs, without having to introduce
>> fundamental randomness into the lower layers of the computation or the
>> substrate.
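>> (A standard toy example of such chaos, my own illustration: the logistic
>> map. Two starting points differing by 1e-12 decorrelate completely within
>> a few dozen iterations; the only way to know the future state is to run
>> it.)
>>
>>     def logistic(x, r=4.0):
>>         return r * x * (1.0 - x)
>>
>>     a, b = 0.3, 0.3 + 1e-12
>>     for _ in range(60):
>>         a, b = logistic(a), logistic(b)
>>     print(abs(a - b))  # the 1e-12 difference has grown to order one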
>> >> >
>> >>
>> >> This concept of free will based on Wolfram's computational
>> >> irreducibility is *almost* good enough for me, but here I'm proposing
>> >> a stronger version.
>> >>
>> >> This is in the paywalled part of my post. Here it is:
>> >>
>> >> The conventional definition of determinism is that the future is
>> >> determined by the present with causal influences limited by the speed
>> >> of light, which take time to propagate in space. But another
>> >> definition of determinism is that the universe computes itself “all at
>> >> once” globally and self-consistently - but not necessarily time after
>> >> time (see 1, 2, 3).
>> >>
>> >> Emily Adlam says that the course of history is determined by “laws
>> >> which apply to the whole of spacetime all at once.”
>> >>
>> >> “In such a theory, the result of a measurement at a given time can
>> >> depend on global facts even if there is no record of those facts in
>> >> the state of the world immediately prior to the measurement, and
>> >> therefore events at different times can have a direct influence on one
>> >> another without any mediation. Furthermore, an event at a given time
>> >> will usually depend not only on events in the past but also on events
>> >> in the future, so retrocausality emerges naturally within this global
>> >> picture… In such a theory, events at a given time are certainly in
>> >> some sense ‘caused’ by future events, since each part of the history
>> >> is dependent on all other parts of the history...”
>> >
>> >
>> > I think where retrocausality can be said to exist, it makes sense to
>> identify the source with the observer's mind state. That is to say, an
>> observer exists within a spectrum of universes (perhaps infinitely many of
>> them) consistent and compatible with her existence. Given the limited
>> information and memory available to any observer, the state of the universe
>> she is within will always remain not fully specified. Hawking seemed to
>> embrace a view like this:
>> >
>> > "The top down approach we have described leads to a profoundly
>> different view of cosmology, and the relation between cause and effect. Top
>> down cosmology is a framework in which one essentially traces the histories
>> backwards, from a spacelike surface at the present time. The no boundary
>> histories of the universe thus depend on what is being observed, contrary
>> to the usual idea that the universe has a unique, observer independent
>> history. In some sense no boundary initial conditions represent a sum over
>> all possible initial states."
>> > -- Stephen Hawking and Thomas Hertog in “Populating the landscape: A
>> top-down approach” (2006)
>> >
>> >
>> > I would say it is not only the state of the universe that is
>> unspecified, but even the laws of physics themselves. We might say that the
>> 20th digit of the fine-structure constant remains in flux until such time
>> as we gain a capacity to measure it. Paul Davies describes something like
>> that here:
>> >
>> > "It is an attempt to explain the Goldilocks factor by appealing to
>> cosmic self-consistency: the bio-friendly universe explains life even as
>> life explains the bio-friendly universe. […] Cosmic bio-friendliness is
>> therefore the result of a sort of quantum post-selection effect extended to
>> the very laws of physics themselves."
>> > -- Paul Davies in “The flexi-laws of physics” (2007)
>> >
>> >
>> >>
>> >>
>> >> Everything dances with everything else before and beyond space and
>> >> time, which themselves emerge from the global dance (see 4, 5). There
>> >> may well be one and only one universe compatible with a set of global
>> >> constraints, but this doesn’t mean that the past alone determines the
>> >> future, or that we can see all global constraints from our place in
>> >> space and time.
>> >>
>> >> This opens the door to a concept of free will derived from John
>> >> Wheeler’s conceptual summary of general relativity:
>> >>
>> >> “Spacetime tells matter how to move; matter tells spacetime how to
>> curve.”
>> >>
>> >> Wheeler’s self-consistent feedback loop between the motion of matter
>> >> and the geometry of spacetime is a deterministic process in the
>> >> conventional sense of Laplace only if we assume that we can always
>> >> follow the evolution of the universe deterministically from its state
>> >> at one time, for example in the past. But this is not the case in
>> >> general relativity, which suggests that the universe is deterministic
>> >> only in a global sense.
>> >
>> >
>> > It's impossible for more fundamental reasons. Attempting to record
>> information about microscopic states (copying a microscopic state of say a
>> particle position, to a larger macroscopic state, say a magnetic region of
>> a hard drive) will itself produce more entropy; further, there are therefore
>> not enough macroscopic states available in the universe to reliably encode
>> and record all the microscopic states. This is responsible for our
>> perceived arrow of time: https://www.youtube.com/watch?v=vgYQglmYU-8 It
>> also explains why we cannot know (or remember) anything about the future.
>> It is because storing a memory (overwriting bits) requires an expenditure
>> of energy by Landauer's principle and energy can only be expended in the
>> direction of time in which entropy increases (and it increases in the
>> direction of time in which the universe expands as this expansion increases
>> the maximum possible entropy of the universe).
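>> (For scale, Landauer's bound is k_B * T * ln 2 of dissipated energy per
>> bit erased; a quick back-of-the-envelope check:)
>>
>>     import math
>>
>>     k_B = 1.380649e-23  # Boltzmann constant, J/K
>>     T = 300.0           # room temperature, K
>>     print(k_B * T * math.log(2))  # ~2.87e-21 joules per bit erased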
>> >
>> >
>> >>
>> >>
>> >> If what I do is uniquely determined by the overall structure of
>> >> reality but not uniquely determined by initial conditions in the past
>> >> then, yes, the structure of reality determines what I do, but what I
>> >> do determines the structure of reality in turn, in a self-consistent
>> >> loop. This deterministic loop includes free will. I first encountered
>> >> this idea in Tim Palmer’s book, then in Emily Adlam’s works.
>> >>
>> >> This is a distributed form of free will. It isn’t that I have
>> >> autonomous free will - it is that I am part of universal free will
>> >> (this parallels the idea that we are conscious because we are part of
>> >> universal consciousness). It makes sense to think that my choices have
>> >> more weight in the parts of the universe that are closer to me in
>> >> space and time (e.g. my own brain here and now) - but remember that
>> >> space and time are derived concepts, so perhaps better to say that the
>> >> parts of the universe where my choices have more weight are closer to
>> >> me.
>> >
>> >
>> > That is interesting. I am not familiar with Palmer's or Adlam's works.
>> Do you have a reference? I am planning to write an article on free will.
>> > I do subscribe to the idea of a universal consciousness, but I am not
>> sure how that relates to a universal free will.
>> >
>> > A question I like to ask of those who use the term "free will", to
>> ensure we are talking about the same thing, is:
>> > What is it that you are proposing that one's will must be "free" from?
>> > Or in other words, what more does a "free will" have that a "will" does
>> not have?
>> > Specifying these things can help to focus the discussion.
>> >
>> >>
>> >> So I’m an active agent with free will because I’m part of the global
>> >> dance, and I’m sentient because I’m a conscious dancer (we don’t need
>> >> to distinguish between active and passive consciousness anymore,
>> >> because everything is active).
>> >>
>> >> But wait a sec - exactly the same things can be said of a conscious
>> >> digital computer. A digital computer is part of the global dance just
>> >> like me, and interacts with the rest of the world just like me. So if
>> >> a digital computer can be said to be conscious, then it is sentient.
>> >>
>> >
>> > I agree. I prefer to define consciousness as sentience, where sentience
>> is anything having awareness of any kind (regardless of its content or its
>> simplicity). That is, if an entity experiences, then it is conscious. If it
>> has feelings, perceptions, or sensations, then it is conscious. If there is
>> something it is like to be that entity, or if it has a "point of view,"
>> then that entity is conscious. There may be value in using terms like
>> self-consciousness or self-awareness or other kinds of consciousness, but I
>> view those as mere special cases of basic consciousness, and all the
>> mysteries of consciousness seem to exist in the basic level, so there's
>> usually no reason to invoke higher orders of consciousness.
>> >
>> >
>> >>
>> >> AIs interact with the rest of the world, and therefore participate in
>> >> the global dance and inherit the lack of Laplacian determinism of the
>> >> rest of the world.
>> >>
>> >> For example, an external input very close to a threshold can fall
>> >> randomly on one or the other side of the edge. Humans provide very
>> >> sensitive external inputs on the edge, not only during operations of
>> >> an AI but also during development and training. For example, recent
>> >> news amplified by Elon Musk on Twitter suggests that ChatGPT has a
>> >> strong political bias.
>> >
>> >
>> > Is there value in linking free will and consciousness together? I do
>> think that an inability to anticipate in advance its own actions is a
>> property inherent to any computational process of appreciable complexity,
>> and so we might say this self-unpredictability is inherent to conscious
>> processes, but I also see that consciousness and awareness can exist in
>> people who are not exercising their will at all. They may be in a purely
>> meditative state, or they may be suffering from a locked-in syndrome and be
>> unable to perform any actions or exercise their will in any way. So would
>> you say there can be thought-moments of pure experience in which will (free
>> or not) does not enter the picture at all? (Is this the passive/active
>> distinction you referenced earlier?)
>> >
>> > Jason