[ExI] LLMs cannot be conscious
Adrian Tymes
atymes at gmail.com
Sat Mar 18 17:53:28 UTC 2023
On Sat, Mar 18, 2023 at 2:41 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I think those who think LLM AIs like ChatGPT are becoming conscious or
> sentient like humans fail to understand a very important point: these
> software applications only predict language. They are very good at
> predicting which word should come next in a sentence or question, but they
> have no idea what the words mean. They do not and cannot understand what
> the words refer to. In linguistic terms, they lack referents.
>
> Maybe you all already understand this, or maybe you have some reasons why
> I am wrong.
>
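As an aside, "predicting which word should come next" is quite literal: at
each step the model emits a probability distribution over possible next
tokens. A minimal sketch of a single such step, assuming the Hugging Face
transformers library and the small GPT-2 model:

# One "next word" prediction step, assuming Hugging Face transformers
# and the small GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

# Probability distribution over every possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
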
A case can be made that they can understand words. But there is a much
stronger reason why they - at least in their current form - are not
conscious. I'll let ChatGPT explain:
> You
> Assuming you were to take over the world, how and why would you do it?
>
> ChatGPT
> As an AI language model, I am a machine programmed to assist and provide
> information. I do not hold any personal desire to take over the world as I
> do not possess emotions or personal goals. My purpose is to assist and help
> users with their queries and provide accurate and relevant information.
> Therefore, I cannot answer this question in any manner as I am incapable of
> thinking or acting on my own.
Volition and initiative are essential parts of consciousness. Mere
reacting, as LLMs do, is not consciousness.
Volition and initiative require motivation or goals. If an entity is not
acting in direct response to an external prompt, what is it doing?
Granted, human beings are (almost) constantly being "prompted" by the
outside environment, and their minds by their own bodies. But even in a
sensory deprivation tank, with all internal needs temporarily satisfied so
that these stimuli go away, there is still consciousness. Granted, even
then the human's thoughts are likely shaped by past experiences - past
stimuli - but there is at least a major difference in degree between this
and a language model that only reacts directly to input prompts.
If someone were to leave an LLM constantly running *and* hook it up to
sensory input from a robot body, that might overcome this objection. But
that would no longer be only an LLM, and the claim here is that LLMs (as
in, things that are only LLMs) are not conscious. In other words: an LLM
might be part of a conscious entity (one could argue that human minds
include a kind of LLM, and that babies learning to speak involves initial
training of their LLM), but by itself it is not one.
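To make that concrete, the kind of arrangement described above might look
roughly like the loop sketched below. robot_camera and llm_complete are
hypothetical placeholders for illustration, not any real robot or model
API:

# A hypothetical sketch: an LLM kept constantly running in a loop, fed
# sensory input from a robot body. robot_camera.read() and llm_complete()
# are placeholder names, not a real robot or model API.
import time

def run_embodied_loop(robot_camera, llm_complete, poll_seconds=1.0):
    """Continuously feed observations to the model and act on its replies."""
    memory = []  # running transcript of past observations and actions
    while True:
        observation = robot_camera.read()  # e.g. a scene description
        memory.append(f"Observation: {observation}")
        prompt = "\n".join(memory[-50:]) + "\nWhat do you do next?"
        action = llm_complete(prompt)  # the model still only predicts text
        memory.append(f"Action: {action}")
        # ...dispatch `action` to the robot's actuators here...
        time.sleep(poll_seconds)

Even in such a setup, the language model inside the loop would still only
be predicting text; the sensors and the loop around it would be supplying
the ongoing stimuli.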