[ExI] LLMs cannot be conscious
Giovanni Santostasi
gsantostasi at gmail.com
Tue Mar 21 01:50:58 UTC 2023
It is the exact opposite of what Gordon says, actually.
NLP models have demonstrated an amazing capability to generate meaning from
statistical properties, and they have demonstrated the power of neural
networks for pattern recognition.
Several years ago, AI experts were skeptical that NLP models could derive
the laws of grammar from these patterns; not only did they achieve exactly
that, they also derived semantics and context.
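To make the "meaning from statistics" point concrete, here is a toy sketch
of my own (plain Python; the corpus is invented for illustration, and a real
transformer learns continuous representations rather than raw counts, so
this only shows the principle): even bare bigram statistics, with no rules
programmed in, begin to reflect grammatical regularities such as which words
can follow an article.

from collections import Counter, defaultdict

# Tiny invented corpus; a real model trains on billions of words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "a cat chased the dog .").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    # Turn raw counts into a probability distribution over next words.
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" is followed only by nouns, never by verbs -- a grammatical
# regularity extracted purely from co-occurrence statistics.
print(next_word_probs("the"))
# -> {'cat': 0.2, 'mat': 0.2, 'dog': 0.4, 'rug': 0.2}

Nothing in that code knows what a noun is, yet the statistics already
separate word classes; scale the same idea up by many orders of magnitude
and you get the grammar, semantics, and context I mentioned above.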
There is evidence that NLP models have emergent properties, such as a
sophisticated theory of mind:
https://www.newscientist.com/article/2359418-chatgpt-ai-passes-test-designed-to-show-theory-of-mind-in-children/
All of this demonstrates that we have the tools to create a sentient AI.
It is a matter of integrating what we have already developed and extending
existing approaches to other types of reasoning, as suggested here:
https://arxiv.org/abs/2301.06627
The AI that Blake Lemoine talked with and claimed to be conscious (an
ultimate, meta version of LaMDA) is exactly what I'm describing.
Lemoine has stated that Google integrated an NLP model like ChatGPT with
the hierarchical organization Kurzweil described in "How to Create a Mind"
and the AI architecture Jeff Hawkins described in "On Intelligence".
So yes, existing NLP models have limitations, but they also demonstrate
that these limitations are a matter of computational power, of how the
training was performed, and of the fact that a language model is only one
of the modules necessary for a true AGI.
NLP models are just one slice of the brain, not the entire brain, but they
do a good job of reproducing a part of our brain that is fundamental for
consciousness.
They do understand, even if in a limited way at this point.
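To sketch what I mean by "one module", here is a purely illustrative
pseudo-architecture (the class names and interfaces are my own invention,
not anything Lemoine or Google has described): the language model supplies
the text-generation slice, while other modules supply the memory and
integration it lacks on its own.

# Illustrative only: an LLM as one module in a larger loop.
class LanguageModule:
    # The "NLP slice": maps a context string to a continuation.
    def generate(self, prompt: str) -> str:
        return "..."  # stand-in for a call to an actual LLM

class MemoryModule:
    # A long-term episodic store, which the language module alone lacks.
    def __init__(self):
        self.episodes = []
    def store(self, episode: str):
        self.episodes.append(episode)
    def recall(self, cue: str) -> str:
        return " ".join(e for e in self.episodes if cue in e)

class Agent:
    # The integration layer: the part that still needs building.
    def __init__(self):
        self.lm = LanguageModule()
        self.memory = MemoryModule()
    def step(self, observation: str) -> str:
        context = self.memory.recall(observation) + " " + observation
        action = self.lm.generate(context)
        self.memory.store(observation + " -> " + action)
        return action

The point is not the details, which are invented, but the shape: the
language model is one subsystem in a loop, just as the language areas are
one subsystem of a brain.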
Giovanni
On Sat, Mar 18, 2023 at 2:41 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I think those who think LLM AIs like ChatGPT are becoming conscious or
> sentient like humans fail to understand a very important point: these
> software applications only predict language. They are very good at
> predicting which word should come next in a sentence or question, but they
> have no idea what the words mean. They do not and cannot understand what
> the words refer to. In linguistic terms, they lack referents.
>
> Maybe you all already understand this, or maybe you have some reasons why
> I am wrong.
>
> -gts