[ExI] Altman says scaling can no longer improve LLM

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 21 13:25:07 UTC 2023

I didn't read this as saying exactly that, but rather that there are diminishing returns in scaling, and that we can improve these models in other ways that do not require scaling.

On Fri, Apr 21, 2023 at 6:15 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On the way out the door to work so I can't write a digest or
> editorialize, but the OpenAI CEO says GPT-4 is about as good as LLMs
> can get.
> https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
> Stuart LaForge
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
