[ExI] all we are is just llms was
Gordon Swobe
gordon.swobe at gmail.com
Mon Apr 24 07:00:23 UTC 2023
On Sun, Apr 23, 2023 at 11:42 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>> That is all fine and good, but nowhere do I see any reason to think the
>> AI has any conscious understanding of its inputs or outputs.
>>
>
> Nor would I expect that you would when you define conscious understanding
> as "the kind of understanding that only human and some animal brains are
> capable of."
> It all comes down to definitions. If we can't agree on those, we will
> reach different conclusions.
>
Yes, agreed, and this goes back to something I believe I wrote to you some
weeks ago: I consider it a logical error to say such things as
"Language models have no conscious understanding as we understand the term,
but they nonetheless have some alien kind of conscious understanding that
we do not understand."
I find that nonsensical. We could say the same of many things. To use
an example I often cite, we could say that because the human immune system
acts in seemingly intelligent ways, it has a conscious understanding alien
to us that we do not understand. Used this way, the word "conscious"
becomes meaningless.
Like any other word, I think that if we are to use the word "conscious" in
any way, it must be in terms we understand. Anything that does not meet that
criterion must simply be called not conscious.
You replied something like "Well, we don't understand human consciousness,
either," but I find that answer unsatisfactory. It feels like an attempt to
dodge the point. While it is certainly true that we do not understand the
physics or biology or possibly metaphysics of consciousness, we *do* understand
it phenomenologically. We all know what it feels like to be awake and to
have subjective experience. We know what it is like to have a
conscious understanding of words, to have conscious experience of color, of
temperature, of our mental contents, and so on. Our experiences might
differ slightly, but it is that subjective, phenomenological consciousness
to which I refer. If we cannot infer the same in x then we must simply
label x as not conscious or at least refrain from making positive claims
about the consciousness of x. As I see it, to do otherwise amounts to
wishful thinking. It might indulge our sci-fi fantasies, but it is a
fallacy.
"Just code."
> You and I also do amazing things, and we're "just atoms."
>
> Do you see the problem with this sentence? Cannot everything be reduced in
> this way (in a manner that dismisses, trivializes, or ignores the emergent
> properties)?
>
I am not denying emergent properties. We also discussed that question with
respect to a language model understanding words. As I tried to explain my
view, and I think you agreed, emergent properties must inhere intrinsically,
even if invisibly, before their emergence, analogous to how the emergent
properties of chess are inherent in its simple rules; the seeds of that
emergent complexity are already present in the rules themselves. I do not,
however, believe that the arbitrary symbols we call words contain the seeds
of their meanings.
-gts