[ExI] all we are is just llms was

Jason Resch jasonresch at gmail.com
Mon Apr 24 13:33:56 UTC 2023


On Mon, Apr 24, 2023, 3:01 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
> On Sun, Apr 23, 2023 at 11:42 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>> That is all fine and good, but nowhere do I see any reason to think the
>>> AI has any conscious understanding of its inputs or outputs.
>>>
>>
>> Nor would I expect that you would when you define conscious understanding
>> as "the kind of understanding that only human and some animal brains are
>> capable of."
>> It all comes down to definitions. If we can't agree on those, we will
>> reach different conclusions.
>>
>
> Yes, agreed, and this goes back to something I believe I wrote to you some
> weeks ago about how I consider it a logical error to say such things as
> "Language models have no conscious understanding as we understand the term,
> but they nonetheless have some alien kind of conscious understanding that
> we do not understand."
>

This is good. I think we're making progress at pinpointing our disagreement.


> I find that nonsensical. We could say the same of many things. To use
> an example I often cite, we could say that because the human immune system
> acts in seemingly intelligent ways, it has a conscious understanding alien
> to us that we do not understand. Used this way, the word "conscious"
> becomes meaningless.
>

Why not use "human consciousness" to refer to the type of subjective
awareness humans have, and reserve "consciousness" for the more general
state of having any kind of awareness or having any point of view
whatsoever?

Then you and I would agree GPT-4 doesn't have human consciousness, and then
we can debate what is necessary for something to have a point of view, and
whether systems like the immune system or transformers can have one.

Being general doesn't have to render a term meaningless.


>
> Like any other word, I think that if we are to use the word "conscious" in
> any way, it must be in terms we understand. Anything that does not meet that
> criterion must simply be called not conscious.
>

I think that's a nonstandard use of the term, but I follow your reasoning.
I could see using the term in that way leading to communication difficulties
with others who use it differently, for example when debating whether
animals, aliens, or androids have subjectivity.



> You replied something like "Well, we don't understand human consciousness,
> either," but I find that answer unsatisfactory. It feels like an attempt to
> dodge the point. While it is certainly true that we do not understand the
> physics or biology or possibly metaphysics of consciousness, we *do* understand
> it phenomenologically. We all know what it feels like to be awake and to
> have subjective experience. We know what it is like to have a
> conscious understanding of words, to have conscious experience of color, of
> temperature, of our mental contents, and so on.
>

For oneself we do. We each know our own phenomenology. But note that this
understanding doesn't extend to even one's closest friend. Why, then, do you
apply the label of consciousness to the whole human species (itself a class
with amorphous boundaries) but not extend it to any point beyond humans? I
am not seeing the rationale or justification you use to accept the
consciousness of other humans when you cannot see or know their
consciousness. Is your justification based on genetics, behavioral capacity,
material similarity, or something else?

> Our experiences might differ slightly, but it is that subjective,
> phenomenological consciousness to which I refer. If we cannot infer the
> same in x then we must simply label x as not conscious or
> at least refrain from making positive claims about the consciousness of x.
>

There's an asymmetry here. If you cannot establish X and have no evidence
either way, then you should neither deny X nor accept X. Above, you seem to
suggest that we should deny X when we have no data to accept it. I don't
agree with that; I think one should remain neutral, uncertain, and open to
either possibility.

I would be much more comfortable if, for example, you took a more agnostic,
"wait and see" position regarding the potential for AI systems to have
subjectivity. We should also talk more about what the requirements are for a
system to possess subjectivity and how we might test for the presence of
those requirements.

I have given some candidates for this, but you rejected them as overly
broad, since then your car's adaptive cruise control would have some modicum
of awareness. I recently posted on another thread a list of different levels
of awareness. Perhaps this can help bridge the gap between us, as I show how
different capacities can lead to higher forms of awareness up to, including,
and beyond human levels of consciousness. Did you see that post?


> As I see it, to do otherwise amounts to wishful thinking. It might indulge
> our sci-fi fantasies, but it is a fallacy.
>

I believe others see your position in this way: that it "amounts to
religious thinking. It might indulge your spiritual fantasies, but it is a
fallacy." I don't believe this is the motivation behind your reasoning,
however. I think instead it stems from an inability to bridge the gap
between your plainly existing and undeniable phenomenal experience and your
seemingly complete understanding of what machines are and what they're
capable of, such that you see no way for any machine, regardless of what it
does or how complex it is, to yield a subjective experience.

Am I warm?



>
> "Just code."
>> You and I also do amazing things, and we're "just atoms."
>>
>> Do you see the problem with this sentence? Cannot everything be reduced
>> in this way (in a manner that dismisses, trivializes, or ignores the
>> emergent properties)?
>>
>
> Not denying emergent properties. We discussed that question also with
> respect to a language model understanding words. As I tried to explain my
> view, and I think you agreed, emergent properties must inhere intrinsically,
> even if invisibly, before their emergence: the seeds of the emergent
> properties of chess are inherent in the simple rules of chess. I do not,
> however, believe that the arbitrary symbols we call words contain the seeds
> of their meanings.
>

Nor do I. I think meaning inheres in patterns, and emerges in minds able to
analyze and discover those patterns.

Jason
