[ExI] Altman says scaling can no longer improve LLM
avant at sollegro.com
Sat Apr 22 02:04:46 UTC 2023
Quoting Giovanni Santostasi <gsantostasi at gmail.com>:
> I didn't read this as saying exactly that but there are diminishing returns
> in scaling and we can improve these models in other ways that do not
> require scaling.
I agree that we can improve LLMs by means other than scaling. However,
I think that the law of diminishing returns (LoDR) applies not just to
LLMs but, as I have stated earlier, to intelligence in general.
As intelligence progresses, it makes more and more logical connections
between existing data points and sees all the possible patterns in the
finite information at its disposal, but at a certain point it is
splitting hairs over minutiae. Intelligence can become saturated with
petty detail and thereby stagnate. At that point only new information
suffices to further increase knowledge, and that new information can
only be acquired by exploration and empiricism.
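The diminishing-returns point can be sketched with a toy power-law loss curve of the kind reported in LLM scaling studies (loss falling as a power of parameter count plus an irreducible floor). The functional form is from the scaling-law literature, but the constants below are invented purely for illustration:

```python
def toy_loss(n_params, E=1.7, A=400.0, alpha=0.34):
    """Hypothetical loss vs. parameter count: floor E plus a power-law term.
    All constants are made up for illustration, not fitted values."""
    return E + A / (n_params ** alpha)

# Each 10x increase in parameters buys a smaller absolute improvement:
gains = [toy_loss(10**k) - toy_loss(10**(k + 1)) for k in range(6, 11)]
assert all(g > 0 for g in gains)                      # loss keeps falling...
assert all(a > b for a, b in zip(gains, gains[1:]))   # ...but by less each step
```

Under this (assumed) form, scaling never stops helping, but each order of magnitude of parameters yields a strictly smaller gain, and no amount of scaling pushes the loss below the floor E set by the available data.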
> On Fri, Apr 21, 2023 at 6:15 AM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> On the way out the door to work so I can't write a digest or
>> editorialize, but the OpenAI founder says GPT-4 is about as good as
>> LLMs can get.
>> Stuart LaForge