[ExI] Bender's Octopus (re: LLMs like ChatGPT)
Gordon Swobe
gordon.swobe at gmail.com
Thu Mar 23 21:36:01 UTC 2023
On Thu, Mar 23, 2023 at 3:20 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Bender's point is not that ChatGPT is incapable of generating sensible
>> sentences about sticks and bears. It is that these LLMs do not know the
>> meanings of any words whatsoever. Confronted with a word it has never
>> seen, a model must fall back on statistical analysis to find probable
>> next words, never knowing what any of them mean.
>>
>
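The statistical next-word prediction described above can be sketched in miniature as a bigram model. This is a toy stand-in for what LLMs do at vastly larger scale; the corpus and function names are illustrative only:

```python
from collections import Counter, defaultdict

# Toy corpus: raw text is the only input; no meanings are attached.
corpus = "the bear chased the dog and the dog ran up a stick".split()

# Count which word follows which: pure pattern statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Pick the statistically most likely next word, knowing nothing of meaning."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_probable_next("the"))  # "dog" follows "the" twice, "bear" once
```

The point of the toy is that the table of counts is built entirely from how symbols are arranged, which is the property at issue in the argument.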
> You keep insisting that. But you don't address the fact that our brains
> learn meaning and understanding from tapping into what amounts to a pure
> information channel.
>
The brain is a mysterious organ and neuroscience is still in its infancy.
All I can say is that one does not learn the meaning of words merely by
looking at how they are arranged in patterns, which is all these language
models do. They have machine-learned the syntax of language, the rules
that govern how these word-symbols arrange into patterns, and they can
manipulate and assemble the symbols in patterns that follow the same
rules. But I disagree with you that from these rules alone they can know
the meanings of the symbols.
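The claim that such models learn only arrangement patterns can be made concrete with a distributional sketch: if a word is represented solely by the neighbors it co-occurs with, two words used in identical contexts become indistinguishable to the model, with meaning never entering the picture. This is a toy illustration, not actual LLM internals:

```python
from collections import Counter

# Sentences in which "bear" and "wolf" occupy identical positions.
sentences = [
    "the bear sleeps in the forest".split(),
    "the wolf sleeps in the forest".split(),
    "a bear eats fish".split(),
    "a wolf eats fish".split(),
]

def context_profile(word):
    """Represent a word solely by the neighboring words it co-occurs with."""
    profile = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            if w == word:
                if i > 0:
                    profile[sent[i - 1]] += 1
                if i + 1 < len(sent):
                    profile[sent[i + 1]] += 1
    return profile

# Identical arrangement patterns yield identical representations.
print(context_profile("bear") == context_profile("wolf"))  # True
```

Whether richer versions of such distributional representations can amount to meaning is exactly what the two sides of this thread dispute.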
-gts
>