[ExI] Ben Goertzel on Large Language Models

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 28 06:35:46 UTC 2023


*grounding in the sense meant by philosophers*
Philosophers are completely useless in this discussion. Science has been
the dominant form of knowledge since the Renaissance, simply because
philosophy is not evidence-based and is just the opinion of some individual.
I have already made the point that you can have grounding in symbols, as
Eco proposed.
Anyway, this is not relevant.
You are simply begging the question by repeating that the AI cannot have
self-awareness or real intelligence because it has no physical referents.
Also, you are insulting a person you don't know, who is actually very
balanced and reasonable in his demeanor and statements. I would be scared
and concerned if you were on some committee that had to decide whether an
AI is conscious, because it seems you would lean toward denying rights to
these new minds.
Fortunately, you are not, but this highlights the importance of
noncentralized AI development.

Giovanni



On Thu, Apr 27, 2023 at 10:30 PM Gordon Swobe <gordon.swobe at gmail.com>
wrote:

> On Thu, Apr 27, 2023 at 11:06 PM Giovanni Santostasi <
> gsantostasi at gmail.com> wrote:
>
>> Gordon,
>> Please listen to this video.
>> At the end (33''), Lemoine explicitly addresses the people who say these
>> models only predict the next word. He says that is technically incorrect
>> and explains why. Notice he even uses the word "grounded" to explain that
>> these systems actually have other knowledge sources to infer the best way
>> to answer a query.
>>
>> https://www.youtube.com/watch?v=d9ipv6HhuWM
>>
>
> I had already watched it and did not notice any intelligent discussion of
> the grounding problem, so I went back and rewatched the section you
> cited. His words to the effect of "being grounded in other informational
> backends, knowledge graphs [etc.]" are not what philosophers mean by
> grounding. Grounding is about how symbols are grounded in experience, not
> merely in yet more symbolic information.
>
> By the way, I can see why some people suggested he seek help from mental
> health professionals, and why Google was inclined to let him go. As I
> understand the story, he went to his superiors or to HR and pleaded on
> behalf of his little friend inside the computer, who supposedly has real
> emotions and a soul.
>
> -gts
>
>
>

