[ExI] My guesses about GPTs consciousness
sen.otaku at gmail.com
Sun Apr 16 12:57:38 UTC 2023
> Patients with certain brain lesions who claim to be unaware (not
consciously aware of) a visual target and yet physically react to the
target when present
One thing of particular interest in this regard is the idea of distributed
cognition. For example, when we touch a hot stove, the signal does not
travel all the way to our brain and back before we take our hand off. We
remove it much more quickly than that.
As we've talked about before on the list, there are glial cells in your
guts that do a limited amount of computation.
In the case of these lesions, either there must be some primitive computation
that happens before conscious registration, or the lesion allows normal
visual cognition but impairs conscious access to it.
This is similar to what Haidt talks about: many of our beliefs ("us" meaning
non-philosophers, not extropians) are arrived at socially rather than via
directed, intense cognition, such as judgments about what things are "icky".
Our justifications are post-hoc. The reasoning happened in a way that is
inaccessible to reason.
Another example -- when you look at brain scans of people trying to
consciously decide something, like which button to "randomly" press, you
see the activation for their choice before they report consciously
choosing. Does this mean that they didn't choose? No, it simply means the
part of them that chose is not accessible to reason, at least in my
opinion. The same way your brain "works on" things in the background below
the level of your conscious awareness, or anxieties surface in dreams.
On Sun, Apr 16, 2023 at 1:25 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sun, Apr 9, 2023 at 12:16 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> Smart doorbell systems able to detect the presence of a person in
>> proximity to a door and alter behavior accordingly have some primitive
>> sensory capacity. One cannot sense without consciousness.
> ### I am not so sure about that. Are you familiar with the phenomenon of
> blindsight? Patients with certain brain lesions who claim to be unaware
> (not consciously aware of) a visual target and yet physically react to the
> target when present?
> This is one of the reasons why I do not subscribe to e.g. panpsychism and
> do not believe all behaving animals have consciousness. There is a whole
> lot of complicated information processing that can guide goal-oriented
> behavior that can happen without conscious experience. Consciousness that
> we experience is something that requires a lot of neural hardware that is
> absent or much different in other animals, and when this hardware is
> disturbed in us, it distorts or eliminates consciousness, in part or in whole.
> GPT has a lot of intelligence and I think it does have a sort of
> consciousness but I am guessing it is completely different from an awake
> human. Here are some of the reasons why I think so:
> 1) Almost all of the cognitive heavy lifting that leads to GPT's answers
> takes place during training. The billions of parameters that determine
> GPT-4's intelligence were set in silicon last year.
> Our interactions with it use the pre-trained structure as sort of a
> look-up table.
> 2) Human consciousness involves continuous information transfer in a loop
> between the global workspace structures in the prefrontal cortex and the
> distributed knowledge throughout specialized cortical areas. GPT doesn't
> seem to have anything of this kind (but my knowledge of its structure is
> hazy, so maybe I am wrong). If GPT is conscious, it's more like being in a
> delirium, flashing in and out of focus rather than having a continuous
> stream of consciousness.
> 3) GPT does not have proprioceptive and visceral sensory input, does not
> have drives controlled by body states (hunger, thirst, lust). It has
> cognitive models of such drives, just as we can imagine, but not
> experience, the interests of other animals. So GPT could fake the verbal
> output of a human responding to instinctive drives but it does not
> experience them.
> 4) I do not know what structures arose in the GPT4 to be able to process
> sensory (e.g. visual) information. If they are different from the human
> sensory cortex, the corresponding qualia might be also completely different
> from human.
> My guess is that GPT's consciousness is like a lobotomized genius human
> polymath storyteller who is kept sedated with ketamine and suffers from
> locked-in syndrome, and is barely hanging on but still smart enough to
> impress us chumps.
> Things will get interesting when he wakes up.