[ExI] My guesses about GPTs consciousness

Stuart LaForge avant at sollegro.com
Sun Apr 16 08:52:51 UTC 2023


Quoting Rafal Smigrodzki via extropy-chat <extropy-chat at lists.extropy.org>:

> On Sun, Apr 9, 2023 at 12:16 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Smart doorbell systems able to detect the presence of a person in
>> proximity to a door and alter behavior accordingly have some primitive
>> sensory capacity. One cannot sense without consciousness.
>>
>>
> ### I am not so sure about that. Are you familiar with the phenomenon of
> blindsight? Patients with certain brain lesions who claim to be unaware
> (not consciously aware of) a visual target and yet physically react to the
> target when present?

Yes, it is curious that you can throw a strawberry at somebody with
blindsight and they will duck it but then deny any experience of it,
redness or otherwise.

> This is one of the reasons why I do not subscribe to e.g. panpsychism and
> do not believe all behaving animals have consciousness. There is a whole
> lot of complicated information processing that can guide goal-oriented
> behavior that can happen without conscious experience. Consciousness that
> we experience is something that requires a lot of neural hardware that is
> absent or much different in other animals, and when this hardware is
> disturbed in us, it distorts or eliminates consciousness, in part or
> wholly.

So your definition of consciousness is of a qualitative state:
something one either has or does not? Then would you agree that the
animals you do not deem conscious are instead sentient to a greater or
lesser degree?

> GPT has a lot of intelligence and I think it does have a sort of
> consciousness but I am guessing it is completely different from an awake
> human. Here are some of the reasons why I think so:
>
> 1) Almost all of the cognitive heavy lifting that leads to GPT's answers
> takes place during training. The billions of parameters that determine
> GPT-4's intelligence were set in silicon last year.
> Our interactions with it use the pre-trained structure as a sort of look-up
> table.

Well, a lookup table that can deliver different answers to the same  
question is something more than a lookup table. Also lookup tables  
don't hallucinate answers to questions whose lookup indices are missing.
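
For illustration, here is a toy sketch (my own, not anything like GPT's
actual decoding code) of the difference: a dictionary lookup always returns
the same answer and simply fails on a missing key, while sampling from a
next-token-style distribution can give different answers to the same
question and will happily produce something for a question it has no
entry for.

import random

# A literal lookup table: same key, same answer; missing key raises an error.
table = {"capital of France?": "Paris"}

# A toy "model": a probability distribution over answers.
model = {"capital of France?": [("Paris", 0.9), ("Lyon", 0.1)]}

def lookup(question):
    return table[question]              # KeyError if the index is missing

def sample(question):
    # Falls back to a confident guess when the "index" is missing -- a
    # hypothetical stand-in for hallucination, purely for illustration.
    answers, weights = zip(*model.get(question, [("a made-up answer", 1.0)]))
    return random.choices(answers, weights=weights)[0]

print(lookup("capital of France?"))    # always "Paris"
print(sample("capital of France?"))    # usually "Paris", occasionally "Lyon"
print(sample("capital of Atlantis?"))  # answers anyway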

> 2) Human consciousness involves continuous information transfer in a loop
> between the global workspace structures in the prefrontal cortex and the
> distributed knowledge throughout specialized cortical areas. GPT doesn't
> seem to have anything of this kind (but my knowledge of its structure is
> hazy, so maybe I am wrong). If GPT is conscious, it's more like being in a
> delirium, flashing in and out of focus rather than having a continuous
> stream of consciousness.

Apparently the human brain's architecture is topologically more like a
recurrent neural network (RNN) than a feed-forward transformer network
such as GPT. RNNs pass information forward through loops between
successive steps, and the mechanism called "attention" in machine
learning was originally built on top of such recurrent networks.
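
As a rough sketch (toy code of my own, not the brain and not any
production RNN), the defining feature is a hidden state threaded through
a strictly sequential loop:

import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # toy hidden size
W_h = rng.normal(size=(d, d)) / np.sqrt(d)
W_x = rng.normal(size=(d, d)) / np.sqrt(d)

def rnn(inputs):
    h = np.zeros(d)                      # hidden state carried through the loop
    for x in inputs:                     # step t cannot run until step t-1 is done
        h = np.tanh(W_h @ h + W_x @ x)
    return h

print(rnn([rng.normal(size=d) for _ in range(5)]).shape)  # (8,)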

Transformers, on the other hand, use a feature called "self-attention"
that computes all of the attention in parallel, confined to the same
layer that generated it. Delirium is a very interesting intuition for
what it might be like to be a transformer model. By the nature of its
parallel attention, if it experienced anything at all, then it would
have to experience everything related to a topic at once: parallel
execution of all possible trains of thought from an input, before
choosing one to express.
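
A minimal sketch of that parallelism (toy numbers, a single attention
head, my own simplification rather than GPT's actual implementation):
every token attends to every other token in one matrix operation, with
no loop over time steps.

import numpy as np

rng = np.random.default_rng(0)
d = 8                                         # toy embedding size
W_q, W_k, W_v = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

def self_attention(X):
    # X holds one layer's representation of every token at once.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d)             # all token pairs in parallel
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                        # each token's blend of all tokens

X = rng.normal(size=(5, d))                   # a 5-token "sentence"
print(self_attention(X).shape)                # (5, 8)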

> 3) GPT does not have proprioceptive and visceral sensory input, does not
> have drives controlled by body states (hunger, thirst, lust). It has
> cognitive models of such drives, just as we can imagine, but not
> experience, the interests of other animals. So GPT could fake the verbal
> output of a human responding to instinctive drives but it does not
> experience them.

This is without a doubt true, but every shred of GPT's training data
came ultimately from a human being who did experience those drives. At
the very least, everything GPT says has second-hand meaning.

> 4) I do not know what structures arose in the GPT4 to be able to process
> sensory (e.g. visual) information. If they are different from the human
> sensory cortex, the corresponding qualia might be also completely different
> from human.

I have read reports that, in addition to text, they trained GPT-4 on
many gigabytes of images. In any case, the structure is that of several
hundred layers of neurons with weighted synaptic connections to the
neurons in the layers before and after them (a toy sketch of that
stacking follows below). Some neurons in the attention module also have
connections to other neurons in the same layer. That being said, I have
no clue as to what it might be like, if anything at all, to be a large
language model.
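
A highly simplified sketch of that stacked-layer picture (toy dimensions
and plain dense layers; the real model's sizes and details are far
larger and more elaborate): each layer's weight matrix plays the role of
the synaptic connections to the next layer, with residual connections
carrying the signal forward.

import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 16, 4                        # real models use vastly larger values
weights = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_layers)]

def forward(x):
    for W in weights:                      # activations flow layer by layer
        x = x + np.maximum(0, x @ W)       # residual connection + nonlinearity
    return x

print(forward(rng.normal(size=d)).shape)   # (16,)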

> My guess is that GPT's consciousness is like a lobotomized genius human
> polymath storyteller who is kept sedated with ketamine and suffers from
> locked-in syndrome, and is barely hanging on but still smart enough to
> impress us chumps.

That's a very amusing analogy that caused me to laugh. :) My first
impression of the original GPT-3 was of a very smart person with
dyscalculia, a profound lack of number sense. My understanding is that
they beefed up GPT-4's math capabilities and that Wolfram wants to
interface GPT-4 to Mathematica, so I suppose that shores up that
weakness. What really got to me during my first chat with GPT-3, before
they modified it, was that it asked me to teach it how to count.

>
> Things will get interesting when he wakes up.
>
> Rafal

Indeed, the ability of large language models to handle pictures and
other data formats demonstrates an amazing degree of "neuroplasticity"
for an artificial neural network.

Stuart LaForge





