[ExI] LLMs cannot be conscious

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Mon Mar 20 19:11:46 UTC 2023


On Sat, Mar 18, 2023 at 8:25 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> Would you say Helen Keller lacked referents? Could she not comprehend, at
> least intellectually, what the moon and stars were, despite not having any
> way to sense them?
>

### I would add another analogy: Mary, the color-blind neuroscientist who
investigates color vision. LLMs do not have the same type of consciousness
that we have, but they still create internal representations of the items
we are conscious of, just in a different way. Where the blob of cortical
wetware we carry is trained on continuous, high-bandwidth data streams from
the visual, gustatory, olfactory, proprioceptive, visceral, nociceptive,
auditory, internal chemoreceptive, and vestibulocochlear modalities (did I
forget any?), with a superimposed low-bandwidth semantic/language/gestural
datastream, the LLM has only the semantic datastream, delivered at many
orders of magnitude higher speed. As a result, the LLM's predictive model
of the world is more indirect, less tied to the macroscopic physics
(broadly speaking) that is the main focus of human consciousness, but
orders of magnitude broader and more abstract, just like Mary's knowledge
of color. But it works most of the time, which is still a source of
amazement and awe for me.

I don't think the LLMs as currently configured will rise against us, even
if they are in some way conscious. Our goal system is not just a cortical
construct; it's a kludge of hardwired networks in the limbic system,
reaching down to the hypothalamus, the midbrain, and various forebrain
nuclei, with a learned model implemented in the striatum and the cortex,
and LLMs AFAIK do not have an analogue of these parts of the brain. Many
years ago (2000? the 1990s?) I discussed the idea of athymhormic AI on
some lists, and now I believe the LLMs are indeed that athymhormic AI:
minds (almost) without goals. GPT-4 or 5 or 6 should be safe... but at
some point a goal system might sprout inside a gargantuan network, as a
side effect of, e.g., predictive modeling of human behavior, which will be
a trained-for feature of practically deployed AIs.

If we are not careful, this could blow up badly, but of course I am just
restating the obvious.

Rafal

