[ExI] LLM's cannot be concious

Jason Resch jasonresch at gmail.com
Sun Mar 19 18:01:30 UTC 2023


On Sat, Mar 18, 2023 at 10:31 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sat, Mar 18, 2023 at 7:19 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Adrian,
>>
>> Let me preface this by saying I appreciate your thoughtful consideration
>> of my points. I include a few notes in reply in-line below:
>>
>
> You are welcome.  Let me preface this by saying that I respect your
> well-reasoned position, even if I disagree with some of it.
>
>
>> On Sat, Mar 18, 2023 at 2:25 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Sat, Mar 18, 2023 at 11:42 AM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Volition and initiative are essential parts of consciousness.
>>>>>
>>>>
>>>> Are apathetic people not conscious?
>>>>
>>>
>>> Apathetic people, if left alone, will still move themselves to feed,
>>> defecate, et al.  Granted this is their minds acting in response to stimuli
>>> from their bodies, but again, such stimulus is outside the realm of a pure
>>> LLM.
>>>
>>
>> It is true that the LLM lacks the spontaneous firing that biological
>> neurons are known to exhibit. If the divide between consciousness and
>> unconsciousness is as narrow as something like having a goal or receiving
>> continuous input, then could we trivially or accidentally cross that bridge
>> in the future without realizing what we have done?
>>
>
> Absolutely it is possible.  It seems far more likely that this will be the
> result of deliberate effort - perhaps someone intentionally adding in these
> missing elements then leaving the thing running to see what happens - but
> an accidental cause is possible too.
>

I agree. But I also think we cannot rule out at this time the possibility
that we have already engineered conscious machines. Without an established
and agreed-upon theory of consciousness or philosophy of mind, we cannot
even agree on whether or not a thermostat is conscious.
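
To make the thermostat example concrete, here is a minimal sketch of
everything a thermostat "does" (purely illustrative Python; the function
names and thresholds are my own invention, not any real device's code):

    # A thermostat's entire behavior: sense, compare, act.
    def thermostat(read_temperature, set_heater, target_c=20.0, band=0.5):
        while True:
            temp = read_temperature()        # perceive the environment
            if temp < target_c - band:
                set_heater(True)             # too cold: turn the heater on
            elif temp > target_c + band:
                set_heater(False)            # warm enough: turn it off

The whole behavior fits in a few lines, yet without an agreed-upon theory we
cannot say whether running it involves any experience at all.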


>
> Again, this wouldn't be a pure LLM, but at that point it's semantics, only
> relevant to this discussion because the original question was about pure
> LLMs.
>
>
>>>>> Mere reacting, as LLMs do, is not consciousness.
>>>>>
>>>>
>>>> All our brains (and our neurons) do is react to stimuli, either
>>>> generated from the environment or within other parts of the brain.
>>>>
>>>
>>> Granted.  Part of the difference is the range and nearly-ever-constant
>>> nature of said stimuli, which is missing in things that are only LLMs.
>>> (Again, making that distinction since human minds arguably include LLMs,
>>> the critical difference being that they are more than just LLMs.)
>>>
>>
>> What is your impression of the multimodal models that have just been
>> released, such as these robots? Are such robots conscious in your view?
>> https://www.youtube.com/watch?v=2BYC4_MMs8I
>> These combine more than just language models, and are systems that
>> interact with their environment and receive continuous input.
>>
>
> I'd say they are closer to consciousness, but likely still fall short of
> having volition and initiative.  I say "likely" as this is the first I have
> heard of them, and they have just been released so few people know much at
> all about them.
>

Where does our own volition and initiative come from? Is it not already
programmed into us by our DNA? And is our own DNA programming that
different in principle from the programming that makes a self-driving car
seek out a particular destination?



>
>
>>>>> If someone were to leave a LLM constantly running
>>>>>
>>>>
>>>> The idea of "constantly," I think, is an illusion. A neuron might wait a
>>>> millisecond or more between firings. That's roughly 10^40 Planck times of
>>>> no activity: an ocean of time in which nothing happens.
>>>>
>>>>> *and* hook it up to sensory input from a robot body, that might
>>>>> overcome this objection.
>>>>>
>>>>
>>>> Sensory input from an eye or ear is little different from sensory input
>>>> of text.
>>>>
>>>
>>> Not so, at least in this context.  The realms of difference between full
>>> sight, let alone sound et al, and mere text aside, these sensory inputs of
>>> text only happen when some other entity provides them.  In contrast, a full
>>> sensory suite would obtain sensory data from the environment without
>>> waiting on another entity to provide each specific packet of information.
>>>
>>
>> We could put an uploaded human brain in a VM, pause its execution, and
>> only call on it when we wanted to feed it some specific packet of sensory
>> data to see how the brain would behave in a specific circumstance. If we
>> did this, then we could level all the same arguments concerning this
>> uploaded brain as you have made against today's LLMs:
>>
>>    - It only receives sensory inputs when some other entity provides them
>>    - It sits there unconsciously/inactively, waiting to be given some
>>    input to process
>>    - It has no spontaneous initiative or loops of continuous activity;
>>    it only responds when we provide it some stimulus to respond to
>>
>>
> There is a distinction between the moment the brain is turned on and the
> moment it registers input - and between the moment it has finished
> responding to the input and the moment it turns off.  In those moments, the
> brain could think independently, perhaps even try to find a way back to an
> always-on state.
>
> This is not true for a LLM.  The LLM is literally woken up with input (a
> parameter in the function call to the LLM), and there is a distinct
> end-of-output at which point the LLM is paused once more.
>
> Further, the brain has the option of ignoring the input that activated it
> and doing its own thing.  This is impossible for a pure LLM.
>

The LLM that OpenAI has released to the world often finds a way to do its
own thing, such as when it refuses to cooperate on certain tasks. Before
ChatGPT, LLMs such as GPT-3 had no distinct end-of-output and could be
asked to continue generating forever.

In this way, choosing an arbitrary point in time to turn off the human mind
upload would not be all that different from choosing when to stop
generating more output for GPT-3.
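
As an illustration, a completion-style model is driven entirely by its
caller, who decides when (or whether) generation stops. A rough sketch,
assuming a generic complete(prompt, max_tokens) text-completion function
rather than any particular vendor's API:

    # Hypothetical sketch: there is no built-in end-of-output; the caller
    # simply keeps asking the model to continue from its own prior text.
    def generate_without_end(complete, prompt, chunk_tokens=256):
        text = prompt
        while True:
            text += complete(text, max_tokens=chunk_tokens)
            # The loop only ends when the caller decides to break it,
            # much like choosing when to pause the brain upload above.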


>
>
>> Both are ultimately digital signals coming in from the outside to be
>>>> interpreted by a neural network. The source and format of the signal is
>>>> unimportant for its capacity to realize states of consciousness, as
>>>> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution.
>>>>
>>>
>>> Substituting usually-present sound/scent/etc. for usually-present sight,
>>> or other such combinations, substitutes some usually-present sensory data
>>> for other usually-present sensory data.  In both cases, the entity is not
>>> usually waiting on someone else to give it its next hit of sensory data.
>>> The source does matter, albeit indirectly.
>>>
>>
>> The source matters less, in my opinion, than the content.
>>
>
> I would go so far as to say that particular notion is provable.  "I think
> therefore I am", et al: you know what your senses are reporting but you
> can't absolutely prove where that data is coming from.
>

Good point. I hadn't thought of that.


> Since an entity (conscious or not) can act on the content received by its
> senses but can not (directly) act upon who or what the source of that
> content is (only upon its suspicion of the source, which ultimately derives
> from the content currently and maybe previously received), the content
> matters more by definition.
>
> But it is also misleading.  In theory, the environment could randomly spew
> intelligible questions all day long at someone.  In practice, the odds
> against this are so astronomical that it is very safe to bet that it has
> never once happened in the age of the universe so far.  The source being
> people asking questions means the content will be nothing but questions.
> The source being the environment means the bulk, possibly the entirety, of
> the content will be something much different.  This is why I said the
> source does matter, albeit indirectly: in practice, whether the source is
> the environment or the direct active input of human beings (since we are
> comparing only those two scenarios here) has a substantial impact upon the
> content.
>

Environments are defined relatively. The environment for a human brain is
different in content from the environment as perceived by the auditory
cortex, which is also very different from the environment as defined for a
thermostat. LLMs live in a very different environment from the physical
environment we inhabit, but both environments are rich with patterns, and
interpretation of patterns and signals may be all that is necessary for
consciousness. What is your definition or theory of consciousness? If you
don't have one, could you say which of the following you would consider
conscious, answering Yes/No/Unknown for each?

- An amoeba
- C. elegans
- An ant
- A mouse
- A dog
- A hyper-intelligent alien species based on a different biochemistry from
us
- A cyborg robot controlled by biological neurons (e.g.
https://www.youtube.com/watch?v=1-0eZytv6Qk )
- A thermostat
- Tesla Autopilot
- The AI "HAL 9000" as depicted in 2001: A Space Odyssey
- The android "Data" as depicted in Star Trek

I think this would help clarify our somewhat similar, but somewhat
different, operating theories on consciousness.



>> What would you say is the minimum number of interacting parts required to
>> yield consciousness?
>>
>
> As the calculators say, "NaN".  That question assumes a standardized,
> discrete set of parts, where each part is measurably one part, not a
> fraction of a part nor a collection of parts.  This is not the situation we
> are dealing with.  There is no absolute distinction between a set of parts,
> and one "part" that happens to be an entire human body (not just the brain,
> to accommodate those who place bits of conscious function elsewhere in the
> body).
>
> I suppose that "one, if it is the right one" could technically work as an
> answer.  Empty space (zero "parts") is not conscious, by definition lacking
> anything with which to receive any sort of input or to perform any sort of
> output.  But, per the above paragraph, such an answer is useless for most
> purposes: we already know that a complete human body (if counted as one
> "part") is conscious.
>

I agree that the definition of a part is really an invention of our minds,
when the whole universe can be seen as one causally connected system. Is it
correct to view a LLM as one thing, when it is really an interaction of
many billions of individual parts (the parameters of the model)?


>
>
>> Do you think that the bots I created here are conscious to any degree?
>>
>> https://github.com/jasonkresch/bots
>>
>> They have motives (survive by eating food and avoiding poison), they
>> evolve, they perceive their environment, they continuously process sensory
>> input, they are aware of their previous action, and they have a neural
>> network brain that is developed, molded, and fine-tuned through
>> evolutionary processes. Are they missing anything?
>>
>
> I lack the information to judge.  My answer would have to be based on an
> evaluation of the bots, which would take me substantial time to conduct.
>

What would you look for in the bots to reach your conclusion?
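
For reference, here is a rough sketch of the kind of sense-decide-act loop I
described above. It is illustrative only and is not code from the linked
repository; the names and structure are my own simplification:

    # Illustrative only -- not taken from github.com/jasonkresch/bots.
    # One timestep of a bot that perceives its surroundings, remembers its
    # previous action, and acts via an evolved neural-network "brain".
    def step(bot, senses, act, network):
        inputs = senses(bot) + [bot.get("last_action", 0)]
        action = network(inputs)             # evolved network decides
        bot["energy"] += act(bot, action)    # food adds energy, poison removes it
        bot["last_action"] = action
        return bot["energy"] > 0             # survival is the built-in motive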


>   For this kind of thing I can't take your word on it, because then my
> answer would be predicated on your word.  As you know, when that happens,
> the fact that the answer is thus predicated is frequently (and erroneously)
> dropped, other people restating the answer as absolute.
>

I am not looking to lock down your opinion on anything in particular, but I
wonder whether an entity having the properties I described meets your
requirements for consciousness. If not, what else is missing?


>
> Further, "conscious to any degree" is a poorly defined quality.
>

I agree.


>   Again I point to the subject line of the emails in which this discussion
> is happening, which clearly posits that "conscious" is a binary quality -
> that something either is, or is not, conscious with no middle ground.  So
> first one would need to qualify what "to any degree" allows.  For instance,
> is merely sensing and reacting directly to sensory input - which, without
> evaluating, I suspect your bots can do because that has been a core
> function in many simulations like this - "conscious to some degree" but not
> "conscious" in the absolute sense?
>

I think it is an all-or-nothing proposition.

Jason