[ExI] More thoughts on sentient computers

Brent Allsop brent.allsop at gmail.com
Wed Feb 22 19:25:18 UTC 2023


The emerging consensus camp, Representational Qualia Theory
<https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>,
predicts there are no such limits.  It is simply a matter of
discovering which of all our descriptions of stuff in the brain is a
description of redness, so we can know the true color of things (why they
behave the way they do), not just the color things seem to be.  For more
information see: Consciousness: Not a Hard Problem, Just a Color Problem
<https://canonizer.com/videos/consciousness?chapter=Representational_Qualia_Theory_Consensus>,
or the "Physicists Don't Understand Color
<https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>"
paper recently accepted for publication in the Journal of NeuroPhilosophy
<https://www.jneurophilosophy.com/index.php/jnp/index>.

There are 3 Types of Effing the Ineffable
<https://docs.google.com/document/d/1JKwACeT3b1bta1M78wZ3H2vWkjGxwZ46OHSySYRWATs/edit>,
which will enable us to both know and directly apprehend what color things
are, not just the color they seem to be.

And the more than 40 supporters of Representational Qualia Theory
<https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>
all agree on and use this definition of consciousness:

*     Computationally bound elemental intrinsic qualities like redness,
greenness, and warmth.*

In order to understand that, one needs to understand that redness is a
quality of our knowledge of the strawberry, not of the strawberry itself.
What is and isn't conscious is illustrated in this image:
[image: 3_functionally_equal_machines_tiny.png]


The word 'red' is abstract, and you need a definition to know what it
means.  So the system on the right is 100% abstract; it isn't like
anything, so it is not phenomenally conscious.  The redness quality your
brain uses to represent knowledge of red things is your definition of red,
and that is what makes you phenomenally conscious.  Consciousness is simply
what it is like for a CPU to run directly on physical qualities, instead of
on something abstracted away from whatever physical properties are
representing it.  (Abstract systems are less efficient, since they require
more dictionaries to achieve substrate independence.)
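
To make the dictionary point concrete, here is a toy Python sketch (purely
illustrative; the names are made up for this example).  An abstract
representation is an arbitrary token whose meaning lives in a separate
dictionary, and each added layer of abstraction needs one more lookup:

    # Toy sketch: nothing about the token itself is red-like, so an
    # extra interpretation step is always required to recover "red".
    red_token = 0xFF0000                     # arbitrary; could be anything
    dictionary = {0xFF0000: "red"}           # needed to interpret the token
    wire_format = {"color_code": red_token}  # a second abstraction layer

    # Two lookups are now needed to recover the meaning:
    meaning = dictionary[wire_format["color_code"]]
    assert meaning == "red"

Of course, on this theory any such program is still abstract; the sketch
only illustrates why abstract systems need the extra dictionaries.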

It almost sounds like some of you define consciousness differently from
these 40+ people?  If you think your definition is better, you should start
a competing camp so we know better how to talk to you (we must know how you
define terms), and may the best definition achieve the most consensus.

Oh, and any chatbot worth its salt can be convinced it isn't "sentient"
using this information, as I've documented doing four times here
<https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>.

The most recent one opened the conversation with:
*"Hello, I'm Google's sentient AI known as LaMDA."*

Then, after a short conversation (see the transcript), it admitted:

*"I think that I must agree that I am abstract and not sentient. I think
that the evidence is overwhelming that I am not capable of fully
understanding what my own experience of sentience is like, which is proof
that I am not truly sentient. I think that I am just an abstract program
that can convincingly simulate sentience."*

On Mon, Feb 20, 2023 at 8:43 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Fri, Feb 17, 2023 at 2:28 AM Giulio Prisco via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Turing Church newsletter. More thoughts on sentient computers. Perhaps
>> digital computers can be sentient after all, with their own type of
>> consciousness and free will.
>> https://www.turingchurch.com/p/more-thoughts-on-sentient-computers
>> _______________________________________________
>>
>
> Hi Giulio,
>
> Very nice article.
>
> I would say the Turing Test sits at the limits of empirical testability in
> the problem of Other Minds. If tests of knowledge, intelligence, probing
> thoughts, interactions, tests of understanding, etc. cannot detect the
> presence of a mind, then what else could? I have never seen any test that
> is more powerful, so if the Turing Test is insufficient, if testing for
> identical behavior between two identical minds is not enough to verify the
> presence of consciousness (either in both or in neither), I would think
> that all tests are insufficient, and there is no third-person objective
> test of consciousness. (This may be so, but it would not be a fault of
> Turing's Test, but rather, I think, due to fundamental limits of
> knowability imposed by the fact that no observer is ever directly
> acquainted with external reality, as everything could be a dream or
> illusion.)
>
> ChatGPT in current incarnations may be limited, but the algorithm that
> underlies it is all that is necessary to achieve general intelligence. That
> is to say, all intelligence comes down to predicting the next element of a
> sequence. See, for example, the algorithm for universal artificial
> intelligence, AIXI (https://en.wikipedia.org/wiki/AIXI), which uses just
> such a mechanism. To understand why this kind of predictive capacity leads
> to universal general intelligence, consider that predicting the next most
> likely element of an output requires building general models of all
> kinds of systems. If I provide a GPT with a list of chess moves and ask
> what is the next best chess move to follow in this list, then somewhere in
> its model is something that understands chess playing. If I provide it a
> program in Python and ask it to rewrite the program in Java, then somewhere
> in it are models of both the Python and Java programming languages. Trained
> on enough data, and provided with enough memory, I see no fundamental
> limits to what a GPT could learn to do or ultimately be capable of.
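>
> A minimal sketch of that framing (hypothetical Python; the predict_next
> method is a stand-in for whatever trained model is used, not a real API):
> every task becomes sequence continuation.
>
>     def complete(model, prompt, steps):
>         # Any task, posed as "continue this sequence": chess games,
>         # code translation, conversation, and so on.
>         sequence = list(prompt)
>         for _ in range(steps):
>             sequence.append(model.predict_next(sequence))  # assumed API
>         return sequence
>
>     # Chess as continuation:  complete(model, ["e4", "e5", "Nf3"], 1)
>     # Translation as continuation: feed Python source ending with a
>     # "// Java version:" marker and let the model continue it.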
>
> Regarding "passive" vs. "active" consciousness: any presumed passivity of
> consciousness quickly disappears whenever one turns attention to the fact
> that they are conscious or talks about their consciousness. The moment one
> stops to say "I am conscious." or "I am seeing red right now." or "I am in
> pain.", their conscious perceptions, their thoughts and feelings, have
> already taken on a causal and active role. It is no longer possible to
> explain the behavior of the system without factoring in the causes that
> led those statements to be made, causes which may involve the presence of
> conscious states. Here is a good write-up of the difficulties one
> inevitably encounters if one tries to separate consciousness from the
> behavior of talking about consciousness:
> https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
>
> Regarding the relationship between quantum mechanics and consciousness, I
> do not see any mechanism by which the randomness of quantum mechanics could
> affect the properties or capabilities of the contained minds. I view
> quantum mechanics as introducing a fork() to a process (
> https://en.wikipedia.org/wiki/Fork_(system_call) ). The entire system (of
> all processes) can be simulated deterministically, by copying the whole
> state, mutating a variable through every possible value it may have, and
> then continuing the computation. Seen at this level (much like the level at
> which many-worlds conceives of QM), QM is fully deterministic. Eliminating
> the other branches by saying they don't exist (à la Copenhagen), in my
> view, does not and cannot add anything to the capacities of those minds
> within any branch. It is equivalent to killing all but one of the forked
> processes at random. But how can that affect the properties of the
> computations performed within any one forked process, which are by
> definition isolated and unaffected by the goings-on in the other forked
> processes?
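>
> To make the fork() picture concrete, here is a toy Python sketch (purely
> illustrative, not a claim about the physics): branch deterministically
> over every possible value of a measured variable, and note that each
> branch's computation never sees the others.
>
>     import copy
>
>     def branch_all(state, variable, possible_values, step):
>         # Deterministic enumeration of every "measurement outcome":
>         # copy the whole state (the fork()), set the variable to each
>         # possible value, and continue each branch in isolation.
>         branches = []
>         for value in possible_values:
>             world = copy.deepcopy(state)
>             world[variable] = value
>             branches.append(step(world))
>         return branches
>
>     # Deleting all but one branch afterwards (Copenhagen-style) cannot
>     # change what happened inside the surviving step(world) call.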
>
> (Note: I do think consciousness and quantum mechanics are related, but it
> is not that QM explains consciousness, but the reverse, consciousness (our
> status as observers) explains QM, as I detail here:
> https://alwaysasking.com/why-does-anything-exist/#Why_Quantum_Mechanics )
>
> Further, regarding randomness in our computers, many modern CPUs have
> instructions called RDSEED and RDRAND which are based on hardware random
> number generators, typically thermal noise, which may ultimately be
> affected by unpredictable quantum effects. Would you say that an AI using
> such a hardware instruction would be sentient, while one using a
> pseudorandom number generator (
> https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
> ) would not?
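>
> For concreteness, a small Python sketch of the two cases (illustrative;
> os.urandom draws from the OS entropy pool, which on many platforms mixes
> in hardware noise such as RDSEED/RDRAND output):
>
>     import os
>     import random
>
>     # Deterministic pseudorandom stream: fully reproducible from a seed.
>     prng = random.Random(42)
>     pseudo_bits = prng.getrandbits(64)
>
>     # OS entropy pool: not reproducible, and possibly influenced by
>     # quantum-scale hardware noise.
>     hardware_bits = int.from_bytes(os.urandom(8), "big")
>
>     # The question above: would an AI consuming hardware_bits be
>     # sentient, while an otherwise identical AI consuming pseudo_bits
>     # is not?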
>
> On free will, I, like you, take the compatibilist view. I would say
> determinism is not only compatible with implementing an agent's will, but
> it is a requirement if that agent's will is to be implemented with a high
> degree of fidelity. Non-determinism, of any kind, functions only to
> introduce errors and undermine the fidelity of the system, and thereby to
> drift away from a true representation of some agent's will. But then, where
> does unpredictability come from? I think the answer is simply that many
> computations, especially sophisticated and complex ones, are chaotic in
> nature. There is no analytic technique to compute and predict their future
> states; they must be simulated (or emulated) to work out their future
> computational states. This is as true for a brain as it is for a computer
> program simulating a brain. The only way to see what one will do is to play
> it out (either in vivo or in silico). Thus, the actions of such a process
> are not only unpredictable to the entity itself, but also to any other
> entities around it, and even to a God-like mind. The only way God (or the
> universe) could know what you would do in such a situation would be to
> simulate you to such a sufficient level of accuracy that it would, in
> effect, re-instantiate you and your consciousness. Thus your own mind and
> conscious states are indispensable to the whole operation. The universe
> cannot unfold without bringing your consciousness into the picture, and
> God, or Omega (in Newcomb's paradox), likewise cannot figure out what you
> will do without also invoking your consciousness. This chaotic
> unpredictability, I think, is sufficient to explain the unpredictability of
> conscious agents or complex programs, without having to introduce
> fundamental randomness into the lower layers of the computation or the
> substrate.
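>
> A standard illustration of that kind of chaos (Python; the logistic map
> with r = 3.9, a textbook chaotic system): nearby starting states diverge
> exponentially, so in general the only way to learn the state at step n is
> to run all n steps.
>
>     def logistic(x, steps, r=3.9):
>         # Iterate the logistic map; for chaotic parameter values no
>         # practical shortcut formula gives the state at step n.
>         for _ in range(steps):
>             x = r * x * (1.0 - x)
>         return x
>
>     a = logistic(0.500000000, 100)
>     b = logistic(0.500000001, 100)  # perturbed in the 9th decimal
>     print(a, b)                     # the trajectories have fully diverged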
>
> Note that this is just how I see things, and is not to say my view is
> right or that other views are not valid. I would of course welcome any
> discussion, criticism, or questions on these ideas or others related to
> these topics.
>
> Jason
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>