[ExI] all we are is just llms was: Re: GPT-4 on its inability to solve the symbol grounding problem

BillK pharos at gmail.com
Fri Apr 21 00:49:06 UTC 2023


On Fri, 21 Apr 2023 at 01:33, Giovanni Santostasi via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> Spike,
> By the way, did you hear that a Google LLM was given just a few prompts in Bengali and was afterwards able to translate any Bengali text, despite never having been trained on Bengali?
> These systems seem to have crazy emergent properties and unexpected capabilities.
> Very interesting times.
> Giovanni
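For concreteness, "a few prompts" here means few-shot prompting: the
model is shown a handful of worked examples and asked to continue the
pattern. Below is a minimal Python sketch of what such a prompt looks
like; the phrase pairs and the prompt format are illustrative only,
not how Bard or PaLM was actually prompted.

# Build an English->Bengali few-shot translation prompt.
# The example pairs are common phrases, chosen for illustration.
few_shot_examples = [
    ("Hello", "হ্যালো"),
    ("Thank you", "ধন্যবাদ"),
    ("How are you?", "আপনি কেমন আছেন?"),
]

def build_prompt(new_sentence: str) -> str:
    """Assemble the few-shot prompt; a model would be asked to
    complete the final 'Bengali:' line."""
    blocks = [f"English: {en}\nBengali: {bn}"
              for en, bn in few_shot_examples]
    blocks.append(f"English: {new_sentence}\nBengali:")
    return "\n\n".join(blocks)

print(build_prompt("Good morning"))

Whether a model can complete such a prompt depends entirely on what
was already in its training data, which is exactly the point below.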


That example was a wild exaggeration / lie.
The LLM had already been trained in Bengali.
Explained in this article -
<https://www.thedailybeast.com/60-minutes-made-a-shockingly-wrong-claim-about-googles-ai-chatbot-bard>
Quote:
PaLM was already trained with Bengali, the predominant language of
Bangladesh. Margaret Mitchell (no relation), a researcher at AI
startup lab HuggingFace and formerly of Google, explained this in a
tweet thread making the argument for why 60 Minutes was wrong.

Mitchell pointed out that, in a 2022 demo, Google showed that PaLM
could communicate and respond to prompts in Bengali. The paper behind
PaLM revealed on a datasheet that the model was indeed trained in the
language with roughly 194 million tokens in the Bengali alphabet.

So it didn't magically learn anything via a single prompt. It already
knew the language.
---------------------
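For anyone who wants to sanity-check a corpus themselves, Bengali text
is trivially detectable by script: its characters live in the Unicode
block U+0980-U+09FF. Here is a rough Python sketch; the sample lines
are stand-ins, and PaLM's 194-million-token figure came from its own
tokenizer counts, not from a character test like this one.

# Fraction of non-whitespace characters in the Bengali Unicode block.
BENGALI = range(0x0980, 0x0A00)

def bengali_fraction(text: str) -> float:
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(ord(c) in BENGALI for c in chars) / len(chars)

samples = [
    "The quick brown fox",    # English -> 0.00
    "আমি বাংলায় কথা বলি",      # Bengali -> 1.00
]
for line in samples:
    print(f"{bengali_fraction(line):.2f}  {line}")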

BillK

