[ExI] Ben Goertzel on Large Language Models
Jason Resch
jasonresch at gmail.com
Fri Apr 28 21:07:45 UTC 2023
On Thu, Apr 27, 2023, 7:20 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> *We are only perhaps one or two major breakthroughs in the use and
> applications of the tools that build LLMs from someone tying all of those
> layers together into something meaningfully more than the sum of its
> parts.*
> This is the worst-case scenario. I think we already have all the
> pieces; they just need to be put together into a consistent whole. People
> are already working on that.
> I have emphasized that my excitement about LLMs comes from how large a
> jump they are from what we had before. It is relatively easy to
> extrapolate what these systems could do, with or without additional
> modules, just by adding more parameters or data. If we need to train
> LLM-like ANNs to mimic how other parts of the brain work, it seems that
> can be done using similar methodologies. I agree LLMs are a good model of
> the language-processing areas of the brain, and adding and connecting
> other simulated brain regions should get us there.
>
Yes. Already we see that the "LLM" learns much more than just language: it
can do math, play chess, write computer programs, compose musical melodies,
and draw pictures. It learned all these skills and incorporated them into
the same model. I'm not convinced, then, that we really need to add
anything to the transformer model aside from some iteration/recursion, and
perhaps extending it with a bit of temporary working memory for problems
that take many steps (like multiplying long numbers).
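To make that "working memory" point concrete, here is a minimal sketch in
plain Python. The model_step function is a hypothetical stand-in for a
single bounded LLM forward pass (not a real model or API); the point is
only the control flow: an outer loop feeds a growing scratchpad back in,
so a fixed-depth step can compute a many-step result like long
multiplication.

def model_step(scratchpad: str, a: str, b: str) -> str:
    """Hypothetical one-step 'model': emit the next partial product."""
    done = scratchpad.count("\n")              # steps recorded so far
    if done < len(b):
        digit = int(b[::-1][done])             # next digit of b, lowest first
        partial = int(a) * digit * 10 ** done
        return f"partial {done}: {partial}\n"
    total = sum(int(line.split(": ")[1]) for line in scratchpad.splitlines())
    return f"ANSWER: {total}"

def multiply_with_scratchpad(a: str, b: str) -> int:
    scratchpad = ""                            # the temporary working memory
    while True:
        step = model_step(scratchpad, a, b)    # one bounded "forward pass"
        if step.startswith("ANSWER:"):
            return int(step.split(": ")[1])
        scratchpad += step                     # persist state between steps

print(multiply_with_scratchpad("123", "456"))  # prints 56088

Running it prints 56088 (= 123 * 456); each iteration produces one partial
product that persists in the scratchpad, which is exactly the kind of
temporary state a single transformer pass lacks.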
Jason