[ExI] More thoughts on sentient computers

Giulio Prisco giulio at gmail.com
Tue Feb 21 07:22:28 UTC 2023


On Mon, Feb 20, 2023 at 4:43 PM Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
>
>
> On Fri, Feb 17, 2023 at 2:28 AM Giulio Prisco via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>> Turing Church newsletter. More thoughts on sentient computers. Perhaps
>> digital computers can be sentient after all, with their own type of
>> consciousness and free will.
>> https://www.turingchurch.com/p/more-thoughts-on-sentient-computers
>> _______________________________________________
>
>
> Hi Giulio,
>
> Very nice article.
>

Thanks Jason!

> I would say the Turing Test sits at the limits of empirical testability in the problem of Other Minds. If tests of knowledge, intelligence, probing thoughts, interactions, tests of understanding, etc. cannot detect the presence of a mind, then what else could? I have never seen any test that is more powerful, so if the Turing Test is insufficient, if testing for identical behavior between two identical minds is not enough to verify the presence of consciousness (either in both or in neither), I would think that all tests are insufficient, and there is no third-person objective test of consciousness. (This may be so, but it would not be a fault of Turing's Test, but rather, I think, due to fundamental limits of knowability imposed by the fact that no observer is ever directly acquainted with external reality (as everything could be a dream or illusion).)
>
> ChatGPT in current incarnations may be limited, but the algorithm that underlies it is all that is necessary to achieve general intelligence. That is to say, all intelligence comes down to predicting the next element of a sequence. See, for example, the algorithm for universal artificial intelligence, AIXI ( https://en.wikipedia.org/wiki/AIXI ), which uses just such a mechanism. To understand why this kind of predictive capacity leads to universal general intelligence, consider that predicting the next most likely element of an output requires building general models of all kinds of systems. If I provide a GPT with a list of chess moves, and ask what is the next best chess move to follow in this list, then somewhere in its model is something that understands chess playing. If I provide it a program in Python and ask it to rewrite the program in Java, then somewhere in it are models of both the Python and Java programming languages. Trained on enough data, and provided with enough memory, I see no fundamental limits to what a GPT could learn to do or ultimately be capable of.
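To make this concrete, here is a deliberately crude toy (my own code, not any real AI system): a bigram predictor that, given enough example sequences, implicitly encodes a model of whatever process generated them. A GPT scales this same "predict the next element" mechanism up by many orders of magnitude:

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count, for each element, which element follows it in the training data."""
    model = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, seq):
    """Predict the most frequent continuation of the last element seen."""
    counts = model.get(seq[-1])
    if not counts:
        return None  # never saw this element during training
    return counts.most_common(1)[0][0]

# The counts implicitly encode the "rules" of whatever process generated
# the training data -- here, a trivial alternation of two symbols.
model = train_bigram([["a", "b", "a", "b"], ["b", "a", "b", "a"]])
print(predict_next(model, ["a"]))  # "b"
```

Somewhere in those counts is a (trivial) model of the generating process, just as somewhere in a GPT's weights are models of chess or Python.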
>
> Regarding "passive" vs. "active" consciousness. Any presumed passivity of consciousness quickly disappears whenever one turns attention to the fact that they are conscious or talks about their consciousness. The moment one stops to say "I am conscious." or "I am seeing red right now." or "I am in pain.", then their conscious perceptions, their thoughts and feelings, have already taken on a causal and active role. It is no longer possible to explain the behavior of the system without factoring in the causes underlying those statements to be made, causes which may involve the presence of conscious states. Here is a good write-up of the difficulties one inevitably encounters if one tries to separate consciousness from the behavior of talking about consciousness: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
>

This is a very interesting observation. Is this a case of Gödelian
infinite regress in a system that reflects upon itself? Does it imply
that the future of a system, which contains agents that think/act upon
the system, is necessarily non-computable from the inside? I'm looking
for strong arguments for this.

> Regarding the relationship between quantum mechanics and consciousness, I do not see any mechanism by which the randomness of quantum mechanics could affect the properties or capabilities of the contained minds. I view quantum mechanics as introducing a fork() to a process ( https://en.wikipedia.org/wiki/Fork_(system_call) ). The entire system (of all processes) can be simulated deterministically, by copying the whole state, mutating a variable through every possible value it may have, then continuing the computation. Seen at this level (much like the level at which many-worlds conceives of QM), QM is fully deterministic. Eliminating the other branches by saying they don't exist (à la Copenhagen), in my view, does not and cannot add anything to the capacities of those minds within any branch. It is equivalent to killing all but one of the forked processes randomly. But how can that affect the properties of the computations performed within any one forked process, which are by definition isolated and unaffected by the goings-on in the other forked processes?
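The fork() analogy can be sketched in a few lines (toy code and toy naming of my own, not any physics library): a "measurement" copies the whole state once per possible outcome and continues every copy, so the branch-level computation is fully deterministic even though any single branch sees a random-looking result:

```python
import copy

def measure(states, variable, outcomes):
    """Deterministically 'fork' every state: one copy per possible outcome.
    There is no randomness anywhere -- the resulting branch set is a pure
    function of the input states."""
    forked = []
    for state in states:
        for value in outcomes:
            branch = copy.deepcopy(state)
            branch[variable] = value
            forked.append(branch)
    return forked

# One initial state, two successive binary "measurements" -> 4 branches,
# and exactly the same 4 branches on every run.
states = [{"spin_x": None, "spin_y": None}]
states = measure(states, "spin_x", ["up", "down"])
states = measure(states, "spin_y", ["up", "down"])
print(len(states))  # 4
```

Discarding all but one branch at random (the Copenhagen move, in this analogy) changes nothing inside the surviving branch, since each branch's computation never referenced the others.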
>
> (Note: I do think consciousness and quantum mechanics are related, but it is not that QM explains consciousness, but the reverse, consciousness (our status as observers) explains QM, as I detail here: https://alwaysasking.com/why-does-anything-exist/#Why_Quantum_Mechanics )
>
> Further, regarding randomness in our computers, many modern CPUs have instructions called RDSEED and RDRAND which are based on hardware random number generators, typically thermal noise, which may ultimately be affected by quantum-mechanically unpredictable effects. Would you say that an AI using such a hardware instruction would be sentient, while one using a pseudorandom number generator ( https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator ) would not?
>

I had exactly this example in a previous longer draft of this post!
(then I just wrote "AIs interact with the rest of the world, and
therefore participate in the global dance and inherit the lack of
Laplacian determinism of the rest of the world"). Yes, I don't see
strong reasons to differentiate between (apparently) random effects in
the wet brain and silicon. Pseudorandom numbers are not "apparently
random" enough.
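The distinction is easy to show with standard Python (nothing exotic): a seeded pseudorandom generator is a fully deterministic function of its seed, while os.urandom draws on the operating system's entropy pool, which may ultimately be fed by hardware noise sources such as RDSEED:

```python
import os
import random

# A seeded PRNG is Laplacian-deterministic: same seed, same "random" stream.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# os.urandom draws on OS entropy (possibly hardware noise); two reads
# are not reproducible, and there is no seed to replay them from.
print(os.urandom(8) != os.urandom(8))  # almost certainly True
```

The pseudorandom stream is "apparently random" only to an observer who doesn't know the seed; the entropy-pool stream is unpredictable even in principle to an observer confined to the software.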

> On free will, I, like you, take the compatibilist view. I would say determinism is not only compatible with implementing an agent's will, but a requirement if that agent's will is to be implemented with a high degree of fidelity. Non-determinism, of any kind, functions only to introduce errors and undermine the fidelity of the system, and thereby drift away from a true representation of some agent's will. But then, where does unpredictability come from? I think the answer is simply that many computations, especially sophisticated and complex ones, are chaotic in nature. There is no analytic technique to compute and predict their future states; they must be simulated (or emulated) to work out their future computational states. This is as true for a brain as it is for a computer program simulating a brain. The only way to see what one will do is to play it out (either in vivo or in silico). Thus, the actions of such a process are unpredictable not only to the entity itself, but also to any other entities around it, and even to a God-like mind. The only way God (or the universe) could know what you would do in such a situation would be to simulate you to a sufficient level of accuracy that it would, in effect, reinstate you and your consciousness. Thus your own mind and conscious states are indispensable to the whole operation. The universe cannot unfold without bringing your consciousness into the picture, and God, or Omega (in Newcomb's paradox), likewise cannot figure out what you will do without also invoking your consciousness. This chaotic unpredictability, I think, is sufficient to explain the unpredictability of conscious agents or complex programs, without having to introduce fundamental randomness into the lower layers of the computation or the substrate.
>

This concept of free will based on Wolfram's computational
irreducibility is *almost* good enough for me, but here I'm proposing
a stronger version.
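Wolfram's computational irreducibility fits on one screen (toy code of my own, implementing the standard Rule 30 cellular automaton): even for a rule this simple, no known closed-form shortcut predicts the pattern, so the only way to know step N is to run all N steps.

```python
def rule30_step(cells):
    """One step of the Rule 30 cellular automaton; cells is the set of live indices."""
    live = {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)}  # Rule 30 lookup table
    return {i for i in range(min(cells) - 1, max(cells) + 2)
            if (int(i - 1 in cells), int(i in cells), int(i + 1 in cells)) in live}

# Start from a single live cell and iterate: the evolution is perfectly
# deterministic, yet irreducible -- it must be played out to be known.
state = {0}
for _ in range(3):
    state = rule30_step(state)
print(sorted(state))  # [-3, -2, 0, 1, 2, 3]
```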

This is in the paywalled part of my post. Here it is:

The conventional definition of determinism is that the future is
determined by the present with causal influences limited by the speed
of light, which take time to propagate in space. But another
definition of determinism is that the universe computes itself “all at
once” globally and self-consistently - but not necessarily time after
time (see 1, 2, 3).

Emily Adlam says that the course of history is determined by “laws
which apply to the whole of spacetime all at once.”

“In such a theory, the result of a measurement at a given time can
depend on global facts even if there is no record of those facts in
the state of the world immediately prior to the measurement, and
therefore events at different times can have a direct influence on one
another without any mediation. Furthermore, an event at a given time
will usually depend not only on events in the past but also on events
in the future, so retrocausality emerges naturally within this global
picture… In such a theory, events at a given time are certainly in
some sense ‘caused’ by future events, since each part of the history
is dependent on all other parts of the history...”

Everything dances with everything else before and beyond space and
time, which themselves emerge from the global dance (see 4, 5). There
may well be one and only one universe compatible with a set of global
constraints, but this doesn’t mean that the past alone determines the
future, or that we can see all global constraints from our place in
space and time.

This opens the door to a concept of free will derived from John
Wheeler’s conceptual summary of general relativity:

“Spacetime tells matter how to move; matter tells spacetime how to curve.”

Wheeler’s self-consistent feedback loop between the motion of matter
and the geometry of spacetime is a deterministic process in the
conventional sense of Laplace only if we assume that we can always
follow the evolution of the universe deterministically from its state
at one time, for example in the past. But this is not the case in
general relativity, which suggests that the universe is deterministic
only in a global sense.

If what I do is uniquely determined by the overall structure of
reality but not uniquely determined by initial conditions in the past,
then yes, the structure of reality determines what I do, but what I
do determines the structure of reality in turn, in a self-consistent
loop. This deterministic loop includes free will. I first encountered
this idea in Tim Palmer’s book, then in Emily Adlam’s works.

This is a distributed form of free will. It isn’t that I have
autonomous free will - it is that I am part of universal free will
(this parallels the idea that we are conscious because we are part of
universal consciousness). It makes sense to think that my choices have
more weight in the parts of the universe that are closer to me in
space and time (e.g. my own brain here and now) - but remember that
space and time are derived concepts, so perhaps better to say that the
parts of the universe where my choices have more weight are closer to
me.

So I’m an active agent with free will because I’m part of the global
dance, and I’m sentient because I’m a conscious dancer (we don’t need
to distinguish between active and passive consciousness anymore,
because everything is active).

But wait a sec - exactly the same things can be said of a conscious
digital computer. A digital computer is part of the global dance just
like me, and interacts with the rest of the world just like me. So if
a digital computer can be said to be conscious, then it is sentient.

AIs interact with the rest of the world, and therefore participate in
the global dance and inherit the lack of Laplacian determinism of the
rest of the world.

For example, an external input very close to a threshold can fall
randomly on one or the other side of the edge. Humans provide very
sensitive external inputs on the edge, not only during the operation of
an AI but also during its development and training. For example, recent
news amplified by Elon Musk on Twitter suggests that ChatGPT has a
strong political bias.
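A minimal illustration of the threshold point (my own toy numbers): a hard decision boundary turns an arbitrarily small perturbation of the input into a discrete flip of the output.

```python
def classify(x, threshold=0.5):
    """A hard decision boundary: the output flips discretely at the threshold."""
    return "A" if x >= threshold else "B"

# Two inputs differing by an unmeasurably small amount straddle the edge,
# so noise far below any practical measurement precision decides the outcome.
eps = 1e-12
print(classify(0.5))        # "A"
print(classify(0.5 - eps))  # "B"
```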


There are countless ways for developers to inject their own political
or other biases in AIs, even unconsciously and even unpredictably, for
example by selecting training data.


> Note that this is just how I see things, and is not to say my view is right or that other views are not valid. I would of course welcome any discussion, criticism, or questions on these ideas or others related to these topics.
>
> Jason
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat


