[ExI] Language models are like mirrors

Jason Resch jasonresch at gmail.com
Sat Apr 1 14:18:05 UTC 2023


Succinctly and well put, Ben.

To Gordon: I'm willing to entertain arguments why you think our brains are
privileged in some way that artificial neural networks are not (and can
never overcome).

Arguments from authority (appealing to what linguists or ChatGPT itself
say) hold little sway and I don't think they will change any minds.

The prominent critics who deny the possibility of computer-generated
consciousness usually fall into one of two camps:

1. Non-computable physics: what the brain does is uncomputable; there are
infinities, continuities, real numbers, true randomness, quantum weirdness,
quantum gravity, wave function collapse, hypercomputation, etc., which
somehow play a fundamental and irreplaceable role in how the brain works,
and no Turing machine, no matter how much memory or time it is given, can
ever emulate this process. (E.g., Roger Penrose)

2. Weak-AI theorists: what the brain does is computable, but even a perfect
emulation or simulation of the brain would never be conscious. It's not the
right stuff. A simulation of lactation won't provide you any milk, so why
should a simulation of a brain give you consciousness? This is sometimes
called biological naturalism. (E.g., John Searle)


From your arguments you seem to be more aligned with camp 2; is that a fair
assessment? Do you think the brain is Turing emulable, or at least
simulable to a sufficient level of accuracy that no one could tell any
difference in its behavior?

The problem with camp 1 is that no one can show anything in physics,
chemistry, or biology that is uncomputable, or explain how or why it could
make a difference or be important. Moreover, we have realistic models of
biological brains and can accurately simulate small parts of them without
relying on unknown or speculative physics.

The problem with camp 2 is that it opens the door to philosophical zombies:
putative beings who in all ways act, speak, and behave exactly as if they
were conscious humans, but who lack any awareness or inner life. This sounds
fine at first, but when you dig into the concept it leads to absurdities:

Imagine a whole Earth populated by such beings. They would still talk about
their consciousness, still discuss it on email lists, still argue whether
their AIs are conscious; they would write whole books on consciousness and
come up with arguments like dancing qualia and neural substitution; they
would even come up with the idea of zombies and argue about their logical
possibility, all the while every one of them denying that they are zombies.
No, on the contrary, each of them claims to have a rich inner life, filled
with joys, sorrows, pains, beautiful sunsets, and favorite foods and
colors, despite the fact that none of them actually sees, tastes, or feels
anything. They can speak at length about their own sensations of pain and
how it makes them feel. From where does this information come? Some of
these zombies even choose euthanasia over a life of pain (which none of
them really feel). What drives them to do that when these zombies
experience no pain? Why do these zombies still claim to be conscious? When
we analyze their brains we see they aren't using the circuits involved with
lying; they actually "believe" they are conscious (if zombies are the sort
of things you will allow to have beliefs).

Between zombies and machine consciousness, I have to say I find the concept
of zombies slightly more dubious. But that's just my personal opinion.

Jason

On Sat, Apr 1, 2023, 9:35 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 01/04/2023 13:43, Gordon Swobe wrote:
>
> Unlike these virtual LLMs, we have access also to the referents in the
> world that give the words in language meaning.
>
>
>
> I don't understand why this argument keeps recurring, despite having been
> demolished more than once.
>
> Here's another take on it:
>
> The LLMs like ChatGPT only have access to symbols that associate with
> further distant sources (articles on the internet, text input from users,
> etc.).
>
> Our brains only have access to symbols that associate with further distant
> sources (sensory inputs and memories, including articles on the internet
> and text (for quite a few things, articles on the internet and text are the
> *only* sources)).
>
> The meanings of these symbols are created within the respective systems
> (computers and brains) by their associations and cross-associations with
> other symbols that have their own sources.
>
> An example: My knowledge of dinosaurs comes from words, pictures, speech,
> articles on the internet, and their interaction with other information that
> I have about the world. I've never met a dinosaur. But I have a pretty firm
> idea of what, for example, an ankylosaur would have been like. I may be
> wrong, of course; there are things that we still don't know about
> ankylosaurs. But that doesn't matter. I have a meaningful model of one in
> my head, by virtue of a symbol being linked to other symbols, that are in
> turn linked... (insert a few thousand neural links here) And none of them
> are from my direct experience of an ankylosaur.
>
> I fail to see any significant difference between my brain and an LLM, in
> these respects, except that my brain is made of water and fats and
> proteins, and an LLM isn't. And perhaps the degree of complexity and number
> of the links. Perhaps. (That's something subject to constant change, and if
> they don't already, these AI systems will soon outstrip the human brain in
> the number and complexity of links).
>
> We both do have access to the 'referents in the world', indirectly. It's
> more like the references within the systems (that link to many other
> things) that give the words meaning.
>
> The various links to text and internet articles that an LLM has, have
> links to other things that have links to other things, that have links to
> other things, and so on, *that originate in the world*. Of course they
> do, or where else could they come from?
>
> Just as my brain has links to links, etc., that originate in the world.
>
> LLMs *do* have access to the referents that give words meaning, in much
> the same way that we do.
>
>
> Ben