[ExI] More thoughts on sentient computers

Giovanni Santostasi gsantostasi at gmail.com
Wed Feb 22 08:00:38 UTC 2023


Giulio,
I read your article. There is no evidence that we need QM for
consciousness.
I will soon write a Medium article with my reflections on what I have
learned interacting with ChatGPT. The most important lesson is that
networks trained with enough data, and with enough degrees of freedom, can
indeed mimic language extremely well. This is an emergent property that
arises from complex but not quantum-mechanical systems, and it doesn't seem
we need much more to actually achieve true sentience.

There is a reason why millions of people, journalists, politicians, and
those of us here on this list are discussing this.
The AI is passing through the deepest part of the uncanny valley. We are
discussing all this because it is starting to show behavior that is very
close to what we consider not just sentient, but human.
Now, how this is achieved doesn't really matter. To be honest, given the
very nonlinear way neural networks operate, the probabilistic nature at the
core of how the text is generated, and how this probability is used to
interpret language (which I think is actually a stronger quality of ChatGPT
than its ability to respond to prompts), we are not really sure what is
going on in the black box. Consider that it was not even clear that these
systems could learn basic grammar, let alone semantics and meaning. And
although NLP models now do that very well, it is in a way magic and not
understood: no QM needed, just a lot of interacting parts in the black box.
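
As a rough illustration of that probabilistic core, here is a minimal
sketch of next-token sampling. To be clear, nothing here is ChatGPT's
actual mechanism: the vocabulary, scores, and temperature are invented for
illustration only.

# Minimal sketch of probabilistic next-token generation. The vocabulary and
# scores are made up; real models sample over tens of thousands of tokens
# with learned scores.
import math
import random

def softmax(scores, temperature=1.0):
    # Turn raw model scores into a probability distribution over tokens.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations of the prompt "The cat sat on the"
vocab = ["mat", "roof", "keyboard", "moon"]
logits = [3.2, 1.5, 0.7, -1.0]  # invented scores standing in for model output

probs = softmax(logits, temperature=0.8)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)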
What we have to go on is the behavior. While most of us are impressed and
fascinated by this AI's behavior (otherwise there would not be so much
excitement and discussion in the first place), after interacting with
ChatGPT for a little while it is clear that something is amiss and that it
is not quite as fully conscious as we would recognize another human being
to be. But we are close, very close. It is not even several orders of
magnitude away; maybe one or two. By the way, one parameter to consider is
how many degrees of freedom this thing has. ChatGPT has about 10^12
parameters (roughly, the connections in the network). If we make a rough
analogy between a synapse and a degree of freedom, this number of
connections corresponds to that of a rat. A rat is a pretty clever animal.
Also, consider that most connections in biological brains are dedicated to
regulation of the body, not to higher information processing.
Humans have about 10^15 connections, so in computational power alone we are
3 orders of magnitude away. Now consider that the trend in NLP over the
last several years has been an improvement in parameter counts by a factor
of 10 every year. This means that one of these AIs will have the
computational power of a person in only 3 years.
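
The arithmetic behind that estimate, using the rough figures above (all of
them assumptions, not measured values), is just:

# Back-of-the-envelope version of the extrapolation above. All numbers are
# the rough assumptions from this post, not measurements.
import math

current_params = 1e12   # assumed ChatGPT-scale parameter count
human_synapses = 1e15   # commonly cited order of magnitude for the human brain
growth_per_year = 10    # assumed 10x growth in parameters per year

gap_orders = math.log10(human_synapses / current_params)
years = math.log(human_synapses / current_params, growth_per_year)
print(f"Gap: {gap_orders:.0f} orders of magnitude, ~{years:.0f} years at 10x/year")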
It is not just what ChatGPT can do now that we should consider, but its
potential. To me, the strongest lesson we have learned so far is how easy
it is to simulate the human mind, including some of its most important
features: creating (see AI art, or storytelling by ChatGPT) and
communicating with sophisticated language and a mastery of grammar and
semantics. It is incredible. All the discussion around simulation vs. the
real thing is meaningless.
Our brain is itself a simulation; I am not sure why this is not understood
by most people. We make up the world. Most of our conscious life is
actually filling in the gaps, confabulating to make sense of the sensory
information we receive (highly filtered and selected) and of our internal
mental states. Our waking life is not too dissimilar from dreams, really. I
want to argue that the reason these NLP models work so amazingly well with
limited resources is exactly because they are making things up as they go,
EXACTLY like we do. Children also learn by imitating, or simulating, what
adults do; that is exactly the evolutionary function of play.
So let's stop making the argument that these AIs are not conscious, or
cannot be conscious, because they simulate. It is the opposite: because
they simulate so well, I think they are already in the grey area of being
"conscious," or of manifesting some qualities of consciousness, and it is
just a matter of a few more iterations, and maybe some add-ons to the NLP
(additional modules that integrate the meta-information better), to have a
fully conscious entity.

The discussion around free will is a complicated one, but again, you don't
need QM to allow the existence of free will, just a complex system that has
emergent properties. Deterministic or not, in the presence of emergent
properties, which are not easily derivable from the single components of
the system but are obviously present through the interaction of its smaller
parts, free will is possible. Anyway, I think "free will" is another of
those very silly philosophical concepts that is more navel gazing than
anything based on the physical reality of the universe. I would rather talk
about the complexity of the decision phase space. We may determine all the
weights of ChatGPT's neural network, but this doesn't help us at all to
understand what its next response will be. Even ChatGPT itself could not do
that, even if it were aware of its own weights or of the other parameters
that describe its state. I think this is a more useful concept than free
will.

Anyway, it is a very exciting time, and it will for sure bring a lot of
interesting discoveries and insights about what consciousness is.

Giovanni

On Fri, Feb 17, 2023 at 12:29 AM Giulio Prisco via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Turing Church newsletter. More thoughts on sentient computers. Perhaps
> digital computers can be sentient after all, with their own type of
> consciousness and free will.
> https://www.turingchurch.com/p/more-thoughts-on-sentient-computers
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

