[ExI] did gpt4 get dumber?

Darin Sunley dsunley at gmail.com
Fri Jul 21 16:10:12 UTC 2023

TLDR: A lot of people hitting a resource-constrained ChatGPT simultaneously
would make its responses slower, but not dumber.

There are things they could be doing that would make it dumber, including
pruning nonessential neurons post-training, or [most likely] adding
additional RLHF (reinforcement learning from human feedback) / censorship.
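To make the pruning idea concrete, here is a minimal sketch of magnitude
pruning on a single weight matrix. This is a toy illustration, not anything
OpenAI has described doing; the matrix size, seed, and keep fraction are all
arbitrary.

```python
# Toy magnitude pruning: zero out the smallest-magnitude weights.
# Everything here (sizes, seed, keep fraction) is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # stand-in for one layer's weight matrix

def prune(weights, keep_fraction):
    """Keep the largest-magnitude `keep_fraction` of weights, zero the rest."""
    k = int(weights.size * keep_fraction)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

W_pruned = prune(W, keep_fraction=0.5)
print("nonzero before:", np.count_nonzero(W), "after:", np.count_nonzero(W_pruned))
```

Whether the network still works afterward is the hot-needle gamble: the
pruned weights were small, but "small" and "nonessential" are not the same
thing.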

On Fri, Jul 21, 2023 at 10:06 AM Darin Sunley <dsunley at gmail.com> wrote:

> Nah, it doesn't work that way. While you can vary the amount of compute
> you use on a training run (and the more you use, typically the better),
> once a neural network like GPT-4 is trained, it takes the same amount of
> compute to run a chat exchange through it every time. You can't reduce
> that amount of compute without rebuilding the thing from the ground up with
> a different, more efficient neural architecture.
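A back-of-the-envelope way to see the fixed-cost point: a dense transformer
spends roughly 2 FLOPs per parameter per generated token at inference, no
matter how many users are queued up. Both numbers below are assumptions --
the parameter count is a GPT-3-scale placeholder, since GPT-4's actual size
is unpublished.

```python
# Rule-of-thumb inference cost for a dense transformer: ~2 FLOPs per
# parameter per generated token. Both inputs below are assumed placeholders.
params = 175e9   # GPT-3-scale stand-in; GPT-4's parameter count is unpublished
tokens = 500     # rough length of one chat response
flops_per_response = 2 * params * tokens
print(f"~{flops_per_response:.2e} FLOPs per response")
```

That per-response cost is baked in by the architecture; a crowd of
subscribers changes the queue length, not the arithmetic.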
> Theoretically they could have changed the activation function in the
> neurons to something more computationally efficient (i.e. moving from
> sigmoid or tanh to ReLU), but afaik they were already using ReLU, which is
> almost as efficient as you can get.
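For scale, here is a quick NumPy timing of the three activations mentioned
above. NumPy on CPU is only a stand-in for the fused GPU kernels a production
model actually uses, so the ratios, not the absolute times, are the point:
ReLU is a single elementwise comparison, while sigmoid and tanh each need a
transcendental per element.

```python
# Compare the elementwise cost of sigmoid, tanh, and ReLU on a large vector.
import timeit

import numpy as np

x = np.random.randn(1_000_000).astype(np.float32)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))   # one exp and one divide per element

def tanh(v):
    return np.tanh(v)                 # one transcendental per element

def relu(v):
    return np.maximum(v, 0.0)         # one comparison per element

for name, fn in [("sigmoid", sigmoid), ("tanh", tanh), ("relu", relu)]:
    secs = timeit.timeit(lambda f=fn: f(x), number=50)
    print(f"{name}: {secs:.3f}s for 50 passes")
```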
> I suppose they could be trying to carve neurons out without doing a
> ground-up retraining. That's kinda like blindly poking into your brain with
> a hot needle - they may find pieces they can prune, but they're just as
> likely to cripple it. If they're actively researching mechanistic
> interpretability (trying to actually understand what the giant inscrutable
> matrices are actually doing) and using those findings to optimize GPT-4,
> they could be subtly altering or damaging its performance if they apply
> those changes to the production code, but it'd be mildly surprising if
> they're doing this in production without extensive benchmarking.
> On Fri, Jul 21, 2023 at 6:59 AM spike jones via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> It felt like GPT got dumber while I was away.  Gordon suggested that I
>> got smarter, but I don’t think that is the case.  Chess players know that
>> phenomenon well: humans definitely got better during the era of chess
>> software development, as anyone who still has old chess software can
>> verify: what once beat us we can now trounce.
>> But… I was mostly away from all internet for about four weeks while on
>> the road, and there has been minimal GPT use for the two weeks since I
>> returned because of circumstances.  So I didn’t get smarter since the first
>> week of June, I got dumber.
>> But GPT seems to have gotten dumber still, and I have a possible
>> explanation.  OpenAI is selling subscriptions like crazy.  We can safely
>> assume they are not adding processors as fast as they are adding
>> subscribers, so it would be reasonable to assume they need to spend fewer
>> computing cycles per response, which would make it feel dumber than before.
>> GPT hipsters, does that sound about right?
>> spike
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat