[ExI] LLMs cannot be conscious
Jason Resch
jasonresch at gmail.com
Sun Mar 19 02:17:08 UTC 2023
Adrian,
Let me preface this by saying I appreciate your thoughtful consideration of
my points. I include a few notes in reply in-line below:
On Sat, Mar 18, 2023 at 2:25 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sat, Mar 18, 2023 at 11:42 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Volition and initiative are essential parts of consciousness.
>>>
>>
>> Are apathetic people not conscious?
>>
>
> Apathetic people, if left alone, will still move themselves to feed,
> defecate, et al. Granted this is their minds acting in response to stimuli
> from their bodies, but again, such stimulus is outside the realm of a pure
> LLM.
>
It is true that the LLM lacks spontaneously firing neurons, which
biological neurons are known to do. If the divide between consciousness and
unconsciousness is as narrow as something like having a goal, or having
continuous input, then could we trivially or accidentally cross this bridge
in the future without realizing what we have done?
>
>
>> Mere reacting, as LLMs do, is not consciousness.
>>>
>>
>> All our brains (and our neurons) do is react to stimuli, either generated
>> from the environment or within other parts of the brain.
>>
>
> Granted. Part of the difference is the range and nearly-ever-constant
> nature of said stimuli, which is missing in things that are only LLMs.
> (Again, making that distinction since human minds arguably include LLMs,
> the critical difference being that they are more than just LLMs.)
>
What is your impression of the multimodal models that have just been released,
such as these robots? Are such robots conscious in your view?
https://www.youtube.com/watch?v=2BYC4_MMs8I
These combine more than just language models, and are systems that interact
with their environment and receive continuous input.
>
>
>> Volition and initiative requires motivation or goals. If an entity is
>>> not acting in direct response to an external prompt, what is it doing?
>>>
>>
>> Sleeping.
>>
>
> A fair point, though a side matter from what I was talking about. When an
> entity that is conscious - which includes not being asleep at that time -
> is not acting in direct response to an external prompt, what is it doing?
>
We have many words for behaviors that involve conscious perception devoid
of action, such as meditating, perceiving, thinking, focusing, resting,
daydreaming, etc. I do not believe these AIs do such things when not
responding to a prompt. Our brains are continuously active: we have loops of
thought, a stream of consciousness, and a stream of sensory input, and at all
times when awake (or when dreaming) they are working to construct a model of
the environment.
I don't think it would take very much to "bolt on" things like a continuous
stream of internally generated prompts for a language model to have a
stream of consciousness. And as Google's PaLM-E shows, we already have
models which combine and integrate various senses; see, for example, this AI
which can be asked questions about the image it is presented with:
https://www.youtube.com/watch?v=g8IV8WLVI8I
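To make the "bolt on" idea concrete, here is a rough sketch in Python of such
an outer loop. It is only an illustration: generate() and sense_environment()
are hypothetical stand-ins for whatever model API and sensory channel one
actually has available.

import time
from collections import deque

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call into some language model."""
    return "..."  # the model's next 'thought'

def sense_environment() -> str:
    """Hypothetical stand-in for any sensory channel (camera caption, etc.)."""
    return ""

def stream_of_thought(seed: str, max_memory: int = 20, period_s: float = 1.0) -> None:
    memory = deque([seed], maxlen=max_memory)  # a rolling working memory
    while True:
        percept = sense_environment()
        if percept:
            memory.append("[percept] " + percept)
        # The model's own previous outputs become its next stimulus.
        thought = generate("\n".join(memory))
        memory.append(thought)
        time.sleep(period_s)  # the loop never waits on an external prompter

The continuity under discussion lives in that outer loop, not in the model
itself.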
>
>
>> If someone were to leave a LLM constantly running
>>>
>>
>> The idea of "constantly," I think, is an illusion. A neuron might wait 1
>> millisecond or more between firing. That's 10^40 Planck times of no
>> activity. That's an ocean of time of no activity.
>>
>> *and* hook it up to sensory input from a robot body, that might overcome
>>> this objection.
>>>
>>
>> Sensory input from an eye or ear is little different from sensory input
>> of text.
>>
>
> Not so, at least in this context. The realms of difference between full
> sight, let alone sound et al, and mere text aside, these sensory inputs of
> text only happen when some other entity provides them. In contrast, a full
> sensory suite would obtain sensory data from the environment without
> waiting on another entity to provide each specific packet of information.
>
We could put an uploaded human brain in a VM, pause its execution, and only
call on it when we wanted to feed it some specific packet of sensory data
to see how the brain would behave in a specific circumstance. If we did
this, then we could level all the same arguments concerning this uploaded
brain as you have made against today's LLMs:
- It only receives sensory inputs when some other entity provides them
- It sits there unconsciously/inactively waiting to be given some input
to process
- It has no spontaneous initiative or loops of continuous activity, it
only responds when we provide it some stimulus to respond to
>
>
>> Both are ultimately digital signals coming in from the outside to be
>> interpreted by a neural network. The source and format of the signal is
>> unimportant for its capacity to realize states of consciousness, as
>> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution.
>>
>
> Substituting usually-present sound/scent/etc. for usually-present sight,
> or other such combinations, substitutes some usually-present sensory data
> for other usually-present sensory data. In both cases, the entity is not
> usually waiting on someone else to give it its next hit of sensory data.
> The source does matter, albeit indirectly.
>
The source matters less, in my opinion, than the content. The brain
wires itself up and creates the structures necessary to interpret the
patterns of data it is provided. For example, consider some of these
accounts:
“For instance, it is possible to induce visual experiences in both blind
and sighted people through the sensation of touch. A grid of over a
thousand stimulators driven by a television camera is placed against a
person’s back. The sensations are carried to the brain where their
processing can induce the having of visual experiences. A sighted woman
reports on her experience of prosthetic vision:
“I sat blindfolded in the chair, the TSR cones cold against my back. At
first I felt only formless waves of sensation. Collins said he was just
waving his hand in front of me so that I could get used to the feeling.
Suddenly I felt or saw, I wasn’t sure which, a black triangle in the lower
left corner of a square. The sensation was hard to pinpoint. I felt
vibrations on my back, but the triangle appeared in a square frame inside
my head.”
(Nancy Hechinger, “Seeing Without Eyes” 1981 p. 43)
-- Douglas Hofstadter and Daniel Dennett in "The Mind’s I" (1981)
Ferrets at a young age had their optic nerves connected to their auditory
cortex. The auditory cortex re-wired itself to process visual input and
they were normally sighted:
https://www.nytimes.com/2000/04/25/science/rewired-ferrets-overturn-theories-of-brain-growth.html
“We don’t see with the eyes, for example. We don’t hear with ears. All of
that goes on in the brain. Remember, if I’m looking at you, the image of
you doesn’t get beyond my retina – just the back of my eye. From there to
the brain, to the rest of the brain, it’s pulses: pulses along nerves. Well
pulses aren’t any different than the pulses from the a big toe. It’s the
way the information it carries in the frequency and the pattern of the
pulses. And so if you can train the brain to extract that kind of
information, then you don’t need the eye to see. You can have an artificial
eye.” – Paul Bach-y-Rita (2003) documentary Father of
"We don't see with our eyes," Bach-y-Rita is fond of saying. "we see with
our brains." The ears, eyes, nose, tongue, and skin are just inputs that
provide information. When the brain processes this data, we experience the
five senses, but where the data come from may not be so important.
"Clearly, there are connections to certain parts of the brain, but you can
modify that," Bach-y-Rita says. "You can do so much more with a sensory
organ than what Mother Nature does with it."
https://www.discovermagazine.com/mind/can-you-see-with-your-tongue
“At Harvard University in the late 1990s, for instance, neurologist Alvaro
Pascual-Leone performed brain scans of blind subjects. When he asked them
to read braille with their reading fingers, their visual cortex lit up.
When sighted people performed the same task, their visual cortex stayed
dormant."
So I believe that regardless of the source or format of the sensory data, if
it is provided to a neural network flexible enough to wire itself to
meaningfully process that input, the input can be associated with novel
states of consciousness appropriate to the patterns of the signal it
receives.
>
>
>> But that would no longer be only a LLM, and the claim here is that LLMs
>>> (as in, things that are only LLMs) are not conscious. In other words: a
>>> LLM might be part of a conscious entity (one could argue that human minds
>>> include a kind of LLM, and that babies learning to speak involves initial
>>> training of their LLM) but it by itself is not one.
>>>
>>
>> I think a strong argument can be made that individual parts of our brains
>> are independently conscious. For example, the Wada test shows each
>> hemisphere is independently conscious. It would not surprise me if the
>> language processing part of our brains is also conscious in its own right.
>>
>
> A fair argument. My position is that not all such parts are independently
> conscious, in particular the language processing part, but that
> consciousness is a product of several parts working together. (I am not
> specifying which parts here, just that language processing by itself is
> insufficient, since the question at hand is whether a language processing
> model by itself is conscious.)
>
What would you say is the minimum number of interacting parts required to
yield consciousness?
Do you think that the bots I created here are conscious to any degree?
https://github.com/jasonkresch/bots
They have motives (survive by eating food and avoiding poison); they evolve;
they perceive their environment; they continuously process sensory input;
they are aware of their previous action; and they have a neural-network brain
that is developed, molded, and fine-tuned through evolutionary processes. Are
they missing anything?
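For anyone who doesn't want to dig through the repository, here is a rough
schematic of the sense-act-evolve loop those bots run. To be clear, this is
an illustrative miniature written for this email, not the code from the link
above; the toy environment and all the names are made up.

import math
import random

N_IN, N_OUT = 3, 2  # senses: food, poison, previous action; actions: eat, abstain

def new_brain():
    # A single-layer neural network: one weight row per possible action.
    return [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_OUT)]

def mutate(brain, rate=0.1):
    return [[w + random.gauss(0, rate) for w in row] for row in brain]

def act(brain, senses):
    # One feed-forward pass: map the senses to the highest-scoring action.
    scores = [math.tanh(sum(w * s for w, s in zip(row, senses))) for row in brain]
    return scores.index(max(scores))

def lifetime(brain, steps=50):
    # The 'motive' is implicit: actions that gain energy keep the bot alive.
    energy, last_action = 10.0, 0
    for _ in range(steps):
        food, poison = random.random(), random.random()  # toy environment
        action = act(brain, [food, poison, float(last_action)])
        energy += (food - poison) if action == 0 else -0.1  # eat or abstain
        last_action = action
        if energy <= 0:
            break
    return energy

def evolve(pop_size=30, generations=50):
    population = [new_brain() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lifetime, reverse=True)
        parents = ranked[: pop_size // 2]  # survivors reproduce with mutation
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return population

It has all the ingredients listed above in miniature; whether a loop like
that amounts to any degree of consciousness is exactly the question.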
Jason