[ExI] Language models are like mirrors

Giovanni Santostasi gsantostasi at gmail.com
Fri Mar 31 20:13:55 UTC 2023


Gordon,
Your mirror analogy is both right (though not in the way you intended)
and also deeply wrong and unfair to this group.
You are talking to a group of people that includes some of the best
experts in the fields relevant to this discussion: neuroscience, computer
science, physics, biology, and *almost everybody disagrees with you.*

You are a smart person, but you don't have the background and knowledge
these people have in these areas. I'm pretty sure you don't know how to
code, for example, or if you do, your experience is very limited.
But you stated that many of us look in the mirror and, like cats, think
there is another cat.
*That is basically an insult, because the reason a cat doesn't recognize
itself is that, notwithstanding our affection for the species, cats are
relatively dumb and fail this cognitive test (basically only some birds
and primates pass it).*

So you are insulting not just me but most of the people on this list,
given our position that there is something there.

We gave you many arguments as to why we are impressed with these LLMs.
Jason was very patient, for example, and gave you a point-by-point
breakdown of why the linguist you mentioned is wrong in her analysis
(most linguists have no clue how these LLMs work, because they lack the
technical background or don't read the relevant literature; probably most
of them don't code either).

Others and I have also pointed out to you repeatedly how these systems
demonstrate emergent properties that cannot be predicted from the basic
elements that make up these LLMs. I have referenced the paper on the
emergence of theory of mind, and the paper discussing the structural and
functional analogy between current LLMs and the anatomy of the
language-processing areas of the brain (that paper explains both why LLMs
are so good at processing language and why they are limited in other
cognitive areas). Others have pointed out the exhaustive paper in which
several cognitive tests were applied to successive generations of GPT,
concluding that the latest one shows signs of AGI.

You instead have used trite arguments from preconceived ideas (mostly
based on some religious view of intelligence and consciousness) that are
not adequate here, because nobody knows how these systems really work: we
do know their general architecture (which you, and the linguist you
quoted, seem unaware of), but not exactly how that architecture produces
the outputs we observe. We recognize the amazing capabilities of these
LLMs and how much they can do with so little, and I in particular see
that reaching true AGI is now a matter of quantity, not quality. We are
roughly two orders of magnitude away from a parameter count comparable to
the human brain's, and at the current pace of progress that gap could be
closed in only about two years (see the back-of-the-envelope sketch
below).
Nobody is saying these systems are already conscious, only that they show
the signs you would expect from a true, fully conscious AGI system. We
are not far; that is the basic claim.
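
To make that arithmetic explicit, here is a minimal sketch. The numbers
are my own round assumptions, not published figures: roughly 10^12
parameters for a current frontier model, roughly 10^14 synapses in the
human brain as the comparison point, and model size doubling about every
four months.

    import math

    # Assumed round figures for illustration only; frontier-model
    # parameter counts are not public, and synapse counts are rough
    # textbook estimates.
    model_params = 1e12      # assumed parameters in a current frontier model
    brain_synapses = 1e14    # common estimate of human brain synapses

    gap = brain_synapses / model_params    # factor left to cover
    orders = math.log10(gap)               # ~2 orders of magnitude

    doubling_time_years = 1 / 3            # assumed ~4-month doubling in size
    doublings_needed = math.log2(gap)      # ~6.6 doublings
    years_to_close = doublings_needed * doubling_time_years

    print(f"gap: {gap:.0e} (~{orders:.0f} orders of magnitude)")
    print(f"~{doublings_needed:.1f} doublings, ~{years_to_close:.1f} years")

Under those assumptions the gap is about a factor of 100 and closes in
roughly two years; change the assumed doubling time and the timeline
shifts accordingly.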

The part where your analogy is right (even if that was not your intended
point) is that there is indeed a self-reflection in the mirror that these
LLMs are. They allow us to reflect on our minds and on what intelligence,
meaning, and consciousness may be. We have looked for a long time for
intellectual companionship beyond our species (hence the fascination with
aliens, for example), and finally we are close to creating an artificial
mind that is similar, but not identical, to ours.

Another way to see what is happening is that the AI is not separate from
us but an integral part of us. The intelligent system is not just the
isolated AI but the AI interacting and working with us. It is, in a
sense, an extension of our minds. I see this already happening when I
interact with GPT-4. It is a dialogue: I ask a question or explore a
topic, many different ideas are presented, I elaborate on them, and GPT-4
gives me different angles or information I was not aware of before. It is
a much better interaction and discussion than I may have with other
humans, and it is not bound by worries about taking up the other person's
time, staying on a particular topic of interest, and so on. It is like
discussing with a different part of myself that has access to a lot of
information I may not have access to. Yes, it is a mirror; it is me
reflecting on me, but in a completely different way than you meant. But
then, every intelligence and consciousness is a mirror of other minds; a
lover is a mirror of ourselves too, for example.
But your intended use, which basically amounted to saying that we are all
stupid because we don't see ourselves in the mirror and instead think
there is a ghost in the machine, is utter bs.
Giovanni




On Fri, Mar 31, 2023 at 7:40 AM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> > There’s nobody there, Gio. It’s just software
>
> There’s nobody there. It’s just particles.
>
> > On Mar 31, 2023, at 6:47 AM, Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >
> >
> > People like my friend Giovanni are looking in the mirror and think they
> see another cat.
> >
> > There’s nobody there, Gio. It’s just software.
> >
> > -gts