[ExI] LLMs cannot be conscious

Adrian Tymes atymes at gmail.com
Wed Mar 22 17:42:01 UTC 2023


On Sun, Mar 19, 2023 at 11:03 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I also think we cannot rule out at this time the possibility that we have
> already engineered conscious machines. Without an established and agreed
> upon theory of consciousness or philosophy of mind, we cannot even agree on
> whether or not a thermostat is conscious.
>

A fair point.


> Where does our own volition and initiative come from? Is it not already
> programmed into us by our DNA?
>

By definition, no it is not.  Our DNA may give us the capacity for volition
and initiative, but "programmed" implies deliberate design.  Short of
speculation about God and intelligent design, our DNA was not deliberately
arranged by any other conscious entity whose existence we can prove.


> What is your definition or theory of consciousness? If you don't have one,
> could you say which of these things you would say possess consciousness?
> With Yes/No/Unknown
>

There are degrees of consciousness - even a human being experiences them.
On a simple lazy morning where one slowly wakes up, one does not snap
instantly from fully unconscious to fully conscious.  So a bare Yes/No
answer would misstate things.

https://en.wiktionary.org/wiki/conscious gives an illustrative example:
"Only highly intelligent beings can be fully conscious."  That the phrase
"fully conscious" makes sense at all implies a level of consciousness that
is less than full yet not completely absent - otherwise that state would
simply be "unconscious", and the qualifier "fully" would have nothing to
distinguish it from.


> I agree the definition of a part is really an invention of our minds,
> when the whole universe can be seen as one causally connected system. Is it
> correct to view a LLM as one thing, when it is really an interaction of
> many billions of individual parts (the parameters) of the model?
>

Like most (maybe all: I haven't yet thoroughly considered exceptions)
things, an LLM can be viewed either as a single thing or as a collection
of smaller things in a certain configuration.
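The part/whole duality can be made concrete with a toy sketch (hypothetical
minimal classes, not any real LLM framework; an actual model would have
billions of parameters rather than five):

```python
class Layer:
    """One part of the model: a bag of individual weight parameters."""
    def __init__(self, weights):
        self.weights = weights  # list of float parameters


class Model:
    """One 'thing' that is simultaneously a configuration of many parts."""
    def __init__(self, layers):
        self.layers = layers

    def parameter_count(self):
        # Viewed as parts: count every individual weight.
        return sum(len(layer.weights) for layer in self.layers)


# Viewed as a whole, `model` is a single object...
model = Model([Layer([0.1, 0.2]), Layer([0.3, 0.4, 0.5])])
# ...viewed as parts, it is five distinct parameters.
print(model.parameter_count())  # prints 5
```

Whether one calls `model` one thing or five is a choice of description, not
a fact about the object itself - which is the point at issue.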


> I lack the information to judge.  My answer would have to be based on an
>> evaluation of the bots, which would take me substantial time to conduct.
>>
>
> What would you look for in the bots to make your conclusion?
>

I would not have a firm, fixed list of criteria prior to the evaluation.
Any attempt to fix one in advance would almost certainly miss important
criteria that only become apparent during the evaluation.

>> Again I point to the subject line of the emails in which this discussion
>> is happening, which clearly posits that "conscious" is a binary quality -
>> that something either is, or is not, conscious with no middle ground.  So
>> first one would need to qualify what "to any degree" allows.  For instance,
>> is merely sensing and reacting directly to sensory input - which, without
>> evaluating, I suspect your bots can do because that has been a core
>> function in many simulations like this - "conscious to some degree" but not
>> "conscious" in the absolute sense?
>>
>
> I think it is an all-or-nothing proposition.
>

And that would seem to be the core of our disagreement.

