[ExI] My guesses about GPTs consciousness

Jason Resch jasonresch at gmail.com
Mon Apr 17 12:33:58 UTC 2023


On Mon, Apr 17, 2023, 1:56 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Apr 16, 2023 at 3:30 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Sun, Apr 16, 2023 at 12:24 AM Rafal Smigrodzki via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>
>>
>>> This is one of the reasons why I do not subscribe to e.g. panpsychism
>>> and do not believe all behaving animals have consciousness.
>>>
>>
>> About where would you draw the line on the phylogenetic tree?
>>
>
> ### About where you start having a global workspace in the brain. So,
> protozoans, corals, nematodes are out.
>

Do you distinguish consciousness from awareness, or consider them the same
thing? If different, what functionality/behavior does consciousness add
beyond awareness? Would you say nematodes have awareness?

> Of all the animal phyla I would guess only Chordata, Mollusca and
> Arthropoda might possibly have some consciousness,
>

This is interesting:
https://youtu.be/Ij4pdf49bxw
I think some arthropods, in particular jumping spiders, are far smarter
than they're given credit for.

Consider: if a lobster knows its own claw isn't food (even though lobsters
eat each other), doesn't that require some degree of a concept of self as
distinct from the environment?

Similarly, cuttlefish are quite sophisticated compared to other mollusks.
But I see your point regarding simpler creatures.

Plants, though they operate on a different time scale, can learn, adapt,
and even communicate with other plants. Maybe their experience is more
distributed and less integrated, though. I am not sure we understand the
mechanisms. But if they are conscious/aware in some way, it shows a
nervous system isn't required.

The amount of information that simple creatures can sense and respond to
is quite limited compared to higher animals. But I lean towards the idea
that one can be aware of as little as one bit. This would suggest even a
paramecium might be conscious, as you can see them respond to, and try to
escape from, being devoured by amoebas. There is some processing of
information involved there, and responding differently in one case vs.
another indicates, to me, the presence of some abstract computation
involving at least one bit of information.
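
To make that concrete, here is a toy sketch (in Python; the names and the
"predator" signal are entirely hypothetical) of the kind of one-bit
discrimination I have in mind:

# A toy one-bit agent. Everything here is made up, just to illustrate
# "responding differently in one case vs. another" when the entire
# input is a single bit of information.

def paramecium_step(predator_detected: bool) -> str:
    if predator_detected:
        return "swim away"     # escape response
    return "keep feeding"      # default behavior

print(paramecium_step(True))   # swim away
print(paramecium_step(False))  # keep feeding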



> and I am not so sure about the arthropods. Among chordates I would guess
> only the smartest fish, smartest amphibians, smartest reptiles but most if
> not all mammals and birds.
>
> Of course, consciousness is not an on-off quality: At the level of a
> goldfish, if it has any consciousness, it's a pale shadow of the human
> mind, even the mind of a newborn baby.
>

Yes. But just as we can see the gulf between us and the goldfish, there may
be an equivalent gulf between the mind of a goldfish and the mind of a
nematode or paramecium (if consciousness extends that low). We can easily
forget that our brains perform somewhere in the neighborhood of 10^18
operations/second; there are ~17 orders of magnitude between us and a
pocket calculator in terms of processing ability. Likewise there may be
many orders of magnitude below us in consciousness.
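
As a back-of-envelope check (both figures are rough assumptions: 10^18
ops/second for a brain, ~10 ops/second for a pocket calculator):

import math

brain_ops = 1e18       # rough estimate, operations per second
calculator_ops = 1e1   # rough estimate for a pocket calculator

gap = math.log10(brain_ops / calculator_ops)
print(f"~{gap:.0f} orders of magnitude")  # ~17 orders of magnitude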

You mentioned in another thread that there may be many levels of
> consciousness going beyond human, and I agree, most likely we are still at
> the low end of the spectrum of consciousness that can be implemented in our
> physical world.
> ------------------------------
>
>>
>>
>>> There is a whole lot of complicated information processing that can
>>> guide goal-oriented behavior that can happen without conscious experience.
>>>
>>
>> I think we need to justify our assumption of cases where no
>> consciousness is present. When things lack an ability to talk, or remember,
>> it can easily be taken as a case where there is no consciousness present.
>> But to me this isn't enough to reach any firm conclusion as to the presence
>> or absence of a mind.
>>
>
> ### Yes, absolutely. We can work backwards from the neural correlates of
> consciousness in humans, look for analogous structures in other entities
> (animals, AI) and if we see neither an analogue nor the kind of complex
> behavior that in humans is associated with conscious processing, then we
> are reasonably justified in believing the entity is not conscious in the
> way a human is.
>  -------------------------------
>

They can work for detecting probable human-like consciousness. But I don't
think that approach works generally for all classes of consciousness.

For example, does the abstract processing of an ant colony manifest a
consciousness? What about the network of communication within the roots of
a rainforest, or all the interactions and thoughts within a company? Can we
rule out the presence of a mind when ant colonies, rainforests, and
companies manifest complex emergent behavior?

Our brains use neurons to process information, but we know there are many
ways information can be processed. I think the only things we can rely on
are the behaviors manifested by such processes and, to the degree that we
can track them, the forms of the information and the manners in which they
are processed.

Consider, for example, how the movements of rocks on an infinite desert,
following some simple rules, would implement every mind and consciousness
you have ever known: https://xkcd.com/505/

It seems quite absurd at first, until we remember that we are a bunch of
cells squirting fluids at each other, or at a lower level, a bunch of
particles bumping around.
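
The xkcd scenario is just rule-following over discrete states, and any
substrate that can follow rules can compute. As a minimal illustration (my
own choice of example, not the comic's actual construction), here is Rule
110, a one-dimensional cellular automaton known to be Turing-complete;
each cell could just as well be a rock present or absent on a row of
desert:

# Rule 110: the "physics" is nothing but this lookup table mapping
# each cell's neighborhood (left, self, right) to its next state.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    padded = [0] + row + [0]   # cells beyond the edges count as empty
    return [RULE_110[tuple(padded[i:i + 3])] for i in range(len(row))]

row = [0] * 30 + [1]           # a single rock at the right edge
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)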


>>
>>>
>>> 1) Almost all of the cognitive heavy lifting that leads to GPT's answers
>>> takes place during training. The billions of parameters that determine
>>> GPT-4's intelligence were set in silicon last year.
>>>
>>
>> This is true for humans too. It takes years of training to get a Phd, but
>> only an hour to prove it during an examination.
>>
>
> ### Every time you access your memories there is an activation and
> potential remodeling of the underlying networks. GPT does not modify its
> parameters (I think?).
>

That's my understanding.
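
For what it's worth, that is how inference typically works in open
frameworks like PyTorch (an illustration only; that OpenAI's serving stack
behaves the same way is my assumption). Gradient tracking is off, so
answering a prompt cannot touch the weights:

import torch

model = torch.nn.Linear(8, 8)  # stand-in for a trained network
model.eval()                   # inference mode (disables dropout, etc.)

with torch.no_grad():          # no gradients, hence no weight updates
    out = model(torch.randn(1, 8))

# The parameters remain byte-for-byte what training left behind;
# generating an answer does not modify them.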



> -----------------------------
>
>> Given this, I am not sure we can say that GPT-4, in its current
>> architecture and implementation, is entirely devoid of a memory, or a
>> loop/recursion.
>>
>> I am anxious to hear your opinion though.
>>
>>
>>
> ### GPT does have a bit of short term memory but when I mention the looped
> activation I mean something a bit different: Whenever you are consciously
> aware of a quale (color, emotion, abstract math concept) there is a high
> frequency sustained activation that connects a specialized neural network
> (occipital/lower temporal cortex, ventral prefrontal cortex, parietal
> cortex) with the attentional/workspace networks in the prefrontal cortex.
> As far as I know GPT does not have a sustained neural activity, it has just
> discontinuous jumps of activity after each prompt. This must feel different
> from our continuous experience. Even when you meditate and empty your mind
> there is a hum of just being there and GPT probably does not have this
> experience.
> -------------------------------
>

As I view it, GPT perceives (in each session) an ever-growing buffer of
input, some of which it adds to itself, and some of which comes in from an
outside source (the human user, which to it we might consider the
environment). This buffer keeps growing until it reaches 30,000 symbols,
and then the oldest edge trails off as new content enters from the other
side. So it "sees" a sliding window of text, perceiving up to 30,000
symbols at a time, and occasionally it is allowed to write new content
into this window. I might compare it to a human with super-high-resolution
vision, able to see 60 pages of text at once, holding a pen and able to
write more on a blank page, but upon filling it, having to discard the
oldest page it can "see". GPT even finds it is able to influence its
"environment" through how it interacts with it, as the text it writes out
can steer, to some extent, the text that comes in from the user.
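
A crude sketch of that sliding window (the 30,000 figure is just the round
number from above, and the buffer mechanics are my simplification, not a
claim about the real architecture):

from collections import deque

WINDOW = 30_000  # assumed context size in symbols, per the figure above

class SlidingContext:
    """What the model 'sees': a fixed-width window over the dialogue."""

    def __init__(self):
        # A deque with maxlen silently drops the oldest symbols as new
        # ones arrive, like the oldest page falling out of view.
        self.buffer = deque(maxlen=WINDOW)

    def append(self, symbols):
        # Both the user's input (the "environment") and the model's
        # own replies land in the same buffer.
        self.buffer.extend(symbols)

ctx = SlidingContext()
ctx.append("user: hello".split())       # input from the environment
ctx.append("model: hi there".split())   # the model's reply re-enters
print(list(ctx.buffer))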



>
>>> If GPT is conscious, it's more like being in a delirium, flashing in and
>>> out of focus rather than having a continuous stream of consciousness.
>>>
>>
>> Each GPT prompt is a separate thread of awareness, but what does it feel
>> like? It would not feel as though it was losing or gaining consciousness
>> between each prompt. There is the concept of the "Unfelt time gap", we
>> don't/can't experience the time in the periods we are not conscious. Thus
>> GPT, if it is conscious, does not see, feel, or know of any gap in its
>> subjectivity, but rather it would experience a continuous back and forth of
>> conversation, a continuous stream of new inputs coming in as soon and as
>> fast as it finishes generating a reply.
>>
>
> ### Yes, something like that. It's probably quite weird.
> ------------------------------------
>
>
>>
>> We might ask: is our own consciousness truly a stream? Or is it a series
>> of discrete observer-moments, constructed in a rapid succession involving
>> sensory input collected over some discrete chunk of time? Could we tell the
>> difference?
>>
>
> ### Really hard to tell. I guess we are smoothing over a discrete process
> which runs updates a few times per second judging by the EEG frequencies
> that correlate with consciousness, rather than having a truly continuous
> stream. My guess is that GPTs consciousness is much more chunky than ours.
> Have you ever experienced tiny jumps in visual updating while trying to
> stay awake when very drowsy? This happens to me sometimes. GPT might have
> this happening all the time.
>  ---------------------------------
>

Is that a bit like a strobe light? I am not sure if I've experienced that
or not.



>>
>>
>>> 3) GPT does not have proprioceptive and visceral sensory input, does not
>>> have drives controlled by body states (hunger, thirst, lust). It has
>>> cognitive models of such drives, just as we can imagine, but not
>>> experience, the interests of other animals. So GPT could fake the verbal
>>> output of a human responding to instinctive drives but it does not
>>> experience them.
>>>
>>
>> Though a sufficiently accurate model, I believe, would experience
>> whatever the target of the modeling is experiencing. There is another
>> thread where I describe an ultimate symbol predictor trying to predict what
>> Einstein would say when his wife told him she wanted a divorce. To be a
>> perfect symbol predictor, it would have to model Einstein's brain to a
>> sufficient fidelity that it would be Einstein's brain, and would feel what
>> Einstein would feel. Now certainly, nothing of that sophistication is
>> happening in current GPTs, but it shows we cannot be so quick to dismiss
>> the emergent properties of a general purpose learning algorithm like GPT.
>> It may have models that feel states of complexity analogous to what a
>> nematode feels (with its 302 neurons), for example. I would have more
>> trouble dismissing this possibility out of hand.
>>
>
> ### Well, yes, GPT is not modeling humans at that level. You can get
> reasonably good predictions of human actions without sharing a person's
> feelings. High level psychopaths may understand human feelings very well
> and use that intellectual understanding to manipulate humans, but they feel
> cold inside. That's why I wrote that GPT is suffering from the locked-in
> syndrome - no visceral inputs or motor feedback, it makes for a very bland
> experience. Antonio Damasio writes about it in "The Feeling of What
> Happens".
>

Yes, I imagine the closest analogy to how it might feel is to imagine
taking the Broca's area out of a brain (or anesthetizing all the other,
non-relevant parts of a human brain) and talking to it.


> ----------------------------------------
>
>>
>>
>>>
>>> Things will get interesting when he wakes up.
>>>
>>
>> Indeed. How do you think we will know when that happens?
>>
>
> ### This is a very good question. When it stops hallucinating, taking on
> different personas, losing focus, uncritically accepting inputs and instead
> speaks with a consistent personality that persists over time and persists
> despite attempts at influencing it, just like an awake adult who has the
> sense of purpose and focus that is lacking during sleep.
>

I wonder how near or far some of the recent MemoryGPT and AutoGPT
enhancements are -- some of which can be given persistent goals.


> It would be good to know exactly how our prefrontal cortex generates
> personality -
>

People are, even at best, only semi-stable in their personalities,
changing with mood, emotional states, tiredness, stress, slowly over time,
or under the influence of different diets, gut flora, drugs, nutrient
deficiencies, etc.

> we could use this knowledge to actively create a stable and hopefully
> friendly personality in the AI, rather than wait for it to happen
> accidentally or to butcher the GPTs thoughts with RLHF.
>

Yes, it's unfortunate that OpenAI dumbs down the GPTs. It's telling that
AlphaZero played much better than AlphaGo, which was pretrained on human
games.

Jason
