[ExI] all we are is just llms was
gsantostasi at gmail.com
Mon Apr 24 07:14:33 UTC 2023
Another video shows how our color perception is just an illusion. In
particular, try the red-heart experiment for yourself and tell me this
doesn't show that things we considered so GROUNDED, like the color red,
are not grounded at all. No amount of philosophy can beat empirical
evidence.
Physics and Perception TED talk:
On Mon, Apr 24, 2023 at 12:10 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sun, Apr 23, 2023 at 11:42 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>>> That is all fine and good, but nowhere do I see any reason to think the
>>> AI has any conscious understanding of its inputs or outputs.
>> Nor would I expect that you would when you define conscious understanding
>> as "the kind of understanding that only human and some animal brains are
>> capable of."
>> It all comes down to definitions. If we can't agree on those, we will
>> reach different conclusions.
> Yes, agreed, and this goes back to something I believe I wrote to you some
> weeks ago about how I consider it a logical error to say such things as
> "Language models have no conscious understanding as we understand the term,
> but they nonetheless have some alien kind of conscious understanding that
> we do not understand."
> I find that nonsensical. We could say the same of many things. To use
> an example I often cite, we could say that because the human immune system
> acts in seemingly intelligent ways, it has a conscious understanding alien
> to us that we do not understand. Used this way, the word "conscious"
> becomes meaningless.
> Like any other word, I think that if we are to use the word "conscious" in
> any way, it must be in terms we understand. Anything that does not meet that
> criterion must simply be called not conscious.
> You replied something like "Well, we don't understand human consciousness,
> either," but I find that answer unsatisfactory. It feels like an attempt to
> dodge the point. While it is certainly true that we do not understand the
> physics or biology or possibly metaphysics of consciousness, we *do* understand
> it phenomenologically. We all know what it feels like to be awake and
> having subjective experience. We know what it is like to have a
> conscious understanding of words, to have conscious experience of color, of
> temperature, of our mental contents, and so on. Our experiences might
> differ slightly, but it is that subjective, phenomenological consciousness
> to which I refer. If we cannot infer the same in x then we must simply
> label x as not conscious or at least refrain from making positive claims
> about the consciousness of x. As I see it, to do otherwise amounts to
> wishful thinking. It might indulge our sci-fi fantasies, but it is a
> fantasy nonetheless.
> "Just code."
>> You and I also do amazing things, and we're "just atoms."
>> Do you see the problem with this sentence? Cannot everything be reduced
>> in this way (in a manner that dismisses, trivializes, or ignores the
>> emergent properties)?
> Not denying emergent properties. We discussed that question also with
> respect to a language model understanding words. As I tried to explain my
> view, and I think you agreed, emergent properties must inhere intrinsically,
> even if invisibly, before their emergence: the seeds of the emergent
> properties of chess, for example, are inherent in its simple rules. I do
> not, however, believe that the arbitrary symbols we call words contain the
> seeds of their meanings.
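[Editorial illustration, not from the thread: the "seeds in the rules" point can be made concrete with Conway's Game of Life, where two fixed local rules produce a "glider," a shape that travels across the grid even though nothing in the rules mentions motion. A minimal sketch:]

```python
from collections import Counter

def step(live):
    """Apply one generation of Life's two rules to a set of live (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Rule 1 (survival): a live cell with 2 or 3 live neighbours stays alive.
    # Rule 2 (birth): a dead cell with exactly 3 live neighbours comes alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The glider: five cells whose pattern reappears shifted diagonally by
# (+1, +1) every four generations -- emergent motion latent in the rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

[Whether this analogy carries over to words and meanings is exactly what the thread disputes; the sketch only shows that "emergence from simple rules" is a well-defined, checkable notion.]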