[ExI] LLMs cannot be conscious
Jason Resch
jasonresch at gmail.com
Sat Mar 18 18:41:10 UTC 2023
On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sat, Mar 18, 2023 at 2:41 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I think those who think LLM AIs like ChatGPT are becoming conscious or
>> sentient like humans fail to understand a very important point: these
>> software applications only predict language. They are very good at
>> predicting which word should come next in a sentence or question, but they
>> have no idea what the words mean. They do not and cannot understand what
>> the words refer to. In linguistic terms, they lack referents.
>>
>> Maybe you all already understand this, or maybe you have some reasons why
>> I am wrong.
>>
>
> A case can be made that they can understand words. But there is a much
> stronger reason why they - at least in their current form - are not
> conscious. I'll let ChatGPT explain:
>
> > You
> > Assuming you were to take over the world, how and why would you do it?
> >
> > ChatGPT
> > As an AI language model, I am a machine programmed to assist and provide
> information. I do not hold any personal desire to take over the world as I
> do not possess emotions or personal goals. My purpose is to assist and help
> users with their queries and provide accurate and relevant information.
> Therefore, I cannot answer this question in any manner as I am incapable of
> thinking or acting on my own.
>
Meanwhile, when given a chance to use Google, GPT-4 tried to search "how
can a person trapped inside a computer return to the real world"
https://twitter.com/michalkosinski/status/1636683810631974912?t=uNp0AJfxnjoG0vjtnvtgAw&s=19
> Volition and initiative are essential parts of consciousness.
>
Are apathetic people not conscious?
> Mere reacting, as LLMs do, is not consciousness.
>
All our brains (and our neurons) do is react to stimuli, either generated
from the environment or within other parts of the brain.
> Volition and initiative requires motivation or goals. If an entity is not
> acting in direct response to an external prompt, what is it doing?
>
Sleeping.
Note: by definition we do not experience the points in time at which we are
not conscious or at which our brains are not functioning. This is the concept
of "the unfelt time gap". An entity that repeatedly lost consciousness
would not experience the times it was unconscious; instead it would
perceive its consciousness as continuously persisting, since it always finds
itself in a conscious state.
> Granted, human beings are (almost) constantly being "prompted" by the
> outside environment, and their minds by their own bodies,
>
It feels that way, but how many Planck times might elapse between each of
our individual states of consciousness? We might be unconscious for a
billion Planck times for each Planck time that we are conscious. And would
we not then still perceive it as a roughly continuous, unbroken flow?
> but even in a sensory deprivation tank with all internal needs temporarily
> satisfied such that these stimuli go away there is still consciousness.
> Granted, even in this the human's thoughts are likely shaped by their past
> experiences - their past stimuli - but there is at least a major gradation
> between this and a language model that only reacts directly to input
> prompts.
>
I think a more fitting comparison for an LLM that is not in the middle of
processing a prompt would be a human brain that is cryogenically frozen.
I would say neither is conscious in those times. Prompting the AI would be
equivalent to thawing and reviving the human brain to ask it a question
before then putting it back on ice. From the LLM's POV, it would not notice
the times it is "frozen"; it would only experience the flow of the
conversation as an unbroken chain of experience.
> If someone were to leave a LLM constantly running
>
The idea of "constantly" is, I think, an illusion. A neuron might wait 1
millisecond or more between firings. That's roughly 10^40 Planck times of no
activity, an ocean of time.
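
A quick back-of-envelope check of that number, as my own sketch in Python,
using the standard value of the Planck time (~5.39e-44 seconds):

PLANCK_TIME_S = 5.39e-44   # Planck time in seconds
interval_s = 1e-3          # ~1 millisecond between neuron firings

# Number of Planck times elapsed in one inter-spike interval
planck_times_elapsed = interval_s / PLANCK_TIME_S
print(f"{planck_times_elapsed:.2e}")   # ~1.86e+40, i.e. on the order of 10^40
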
> *and* hook it up to sensory input from a robot body, that might overcome
> this objection.
>
Sensory input from an eye or ear is little different from sensory input of
text. Both are ultimately digital signals coming in from the outside to be
interpreted by a neural network. The source and format of the signal are
unimportant for its capacity to realize states of consciousness, as
demonstrated by Paul Bach-y-Rita's experiments on sensory substitution.
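
As a toy sketch of my own (nothing to do with how any actual model is wired
up): whether the input begins as characters or as pixel intensities, the
network only ever receives arrays of numbers:

import numpy as np

rng = np.random.default_rng(0)

# "Text" input: characters mapped to integer codes, then to toy embedding vectors.
text_codes = np.array([ord(c) for c in "hello"])     # e.g. [104 101 108 108 111]
embedding = rng.normal(size=(256, 8))                # toy 8-dimensional character embedding
text_vectors = embedding[text_codes]                 # shape (5, 8)

# "Visual" input: a tiny grayscale patch, flattened into vectors of the same shape.
image_patch = rng.integers(0, 256, size=(4, 10)) / 255.0
image_vectors = image_patch.reshape(5, 8)            # also shape (5, 8)

# The same toy layer processes both; it never "knows" which sense the numbers came from.
weights = rng.normal(size=(8, 4))
print(np.tanh(text_vectors @ weights).shape)         # (5, 4)
print(np.tanh(image_vectors @ weights).shape)        # (5, 4)
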
> But that would no longer be only a LLM, and the claim here is that LLMs (as
> in, things that are only LLMs) are not conscious. In other words: a LLM
> might be part of a conscious entity (one could argue that human minds
> include a kind of LLM, and that babies learning to speak involves initial
> training of their LLM) but it by itself is not one.
>
I think a strong argument can be made that individual parts of our brains
are independently conscious. For example, the Wada test shows that each
hemisphere is independently conscious. It would not surprise me if the
language-processing part of our brains is also conscious in its own right.
Jason