[ExI] Altman says scaling can no longer improve LLM

Stuart LaForge avant at sollegro.com
Sat Apr 22 02:04:46 UTC 2023


Quoting Giovanni Santostasi <gsantostasi at gmail.com>:

> Stuart,
> I didn't read this as saying exactly that, but there are diminishing returns
> to scaling, and we can improve these models in other ways that do not
> require scaling.
> Giovanni

I agree that we can improve LLMs by means other than scaling. However,
I think that the law of diminishing returns (LoDR) applies not just to
LLMs but, as I have stated earlier, to intelligence in general. As
intelligence progresses, it makes more and more logical connections
between existing data points and sees all the possible patterns in the
finite information at its disposal, but at a certain point it is
splitting hairs over minutiae. It is as though intelligence can become
saturated with petty detail and thereby stagnate. At that point, only
new information can further increase knowledge, and that new
information can only be acquired by exploration and empiricism.
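
To make the diminishing-returns point concrete, here is a toy sketch of
my own (the constants are made up for illustration, not taken from the
Wired piece or any real scaling study). If loss falls off as a power
law in compute, each doubling of compute buys a smaller absolute
improvement than the one before:

# Toy power-law scaling curve: loss(C) = a * C**(-alpha).
# a and alpha are arbitrary, chosen only to show diminishing
# returns, not fitted to any actual model.
a, alpha = 10.0, 0.05

def loss(compute):
    return a * compute ** (-alpha)

prev = loss(1.0)
for doubling in range(1, 11):
    cur = loss(2.0 ** doubling)
    print(f"doubling {doubling:2d}: loss {cur:.3f}, "
          f"gain {prev - cur:.3f}")
    prev = cur

The printed gain shrinks geometrically with each doubling of compute,
which is all the law of diminishing returns amounts to here.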

Stuart LaForge


> On Fri, Apr 21, 2023 at 6:15 AM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> On the way out the door to work, so I can't write a digest or
>> editorialize, but the OpenAI founder says GPT-4 is about as good as
>> LLMs can get.
>>
>>
>> https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
>>
>> Stuart LaForge
>>
