[ExI] all we are is just llms was: RE: e: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 21 05:43:25 UTC 2023


I didn't assassinate Berger's character; she is associated with a group
(mostly women) that has a precise political agenda against AI and
transhumanism in general. I gave a link to a talk by one of their little
group (ex-AI ethicists at Google) in which she repeats the disgusting trope
that transhumanism equals eugenics. By the way, Berger blocked me on
Twitter after I asked whether she can code, lol.

We don't have enough info about this particular event. I don't trust what
Mitchel says given her bias and agenda, but it is possible that Google's
CEO misspoke or didn't explain what happened. I have repeated many times
that linguists like Berger didn't believe LLMs could derive grammar just
from looking at patterns in language.
But LLMs derived the rules of grammar anyway.

I know enough about emergence in complex systems to say that yes, I think
many higher-level behaviors, not just in AIs but in humans, are derived
from the complex interactions of billions of connections in neural
networks. AIs do not need to be exposed to millions of examples of chess
games to learn how to play chess; they need only to play chess against
themselves with a particular utility function, and within a day an AI can
beat a human master.
I read the technical paper on those bots that taught themselves how to
play soccer. They were not exposed to millions of examples of soccer
games; they derived the best way to play soccer (and even taught
themselves how to stand, run, and so on) by trial and error, again guided
by a particular goal (literally) that was assigned to them.
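To make the trial-and-error idea concrete, here is a minimal sketch of learning from nothing but a reward signal: a tabular Q-learning agent in a toy 1-D corridor. This is a hypothetical illustration, not the actual setup from the chess or soccer papers (those use deep networks and self-play at enormous scale); the environment, names, and hyperparameters here are all invented for the example.

```python
import random

# Toy environment: positions 0..5 in a corridor; reaching position 5
# yields reward 1.0. The agent is never shown examples of good play --
# it only acts, observes the reward, and updates its value estimates.
N = 6
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

def step(state, action):
    """Move left (-1) or right (+1), clipped to the corridor."""
    nxt = max(0, min(N - 1, state + action))
    reward = 1.0 if nxt == N - 1 else 0.0
    return nxt, reward, nxt == N - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < EPS:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted
        # value of the best next action
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy heads toward the goal from every
# interior position, derived purely from the reward signal.
policy = {s: max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

The point of the sketch is the same one made above: nothing in the code encodes "move right is good"; that behavior emerges from repeated interaction plus an assigned goal.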

It is magical in the sense that life is magical, but there is real physics
behind it. I'm all for rational explanations of reality, and I'm a
functionalist through and through, so I'm not the one invoking some
magical life force or soul to explain life or consciousness.
Giovanni


On Thu, Apr 20, 2023 at 10:28 PM Gordon Swobe <gordon.swobe at gmail.com>
wrote:

>
>
>> By the way, did you hear that a Google version of an LLM was given just a
>> few prompts in Bengali and it was able to translate after that every text
>> in Bengali despite not having had any training in Bengali?
>>
>
> You didn't answer my question, Gio. Do you really believe what you wrote
> above, that Google's LLM learned Bengali despite no training in Bengali? I
> don't know why else you would be so eager to assassinate the characters of
> those who say otherwise.
>
> The confusion here, as you would learn if you were to investigate, is that
> Bard did demonstrate the ability to translate from one language to another.
> One might say that is remarkable, but it's hardly the same as learning a
> language from nothing.
>
> GPT-4 can translate English words into C++, too. Last night I asked it to
> write a blackjack game for me. It took about five minutes.
>
> -gts
>
