[ExI] ChatGPT 'Not Interesting' for creative works

Gadersd gadersd at gmail.com
Tue Mar 7 13:59:28 UTC 2023


I tried it the way you did and GPT-JT is still not able to generate the answer. Are you using GPT-JT or ChatGPT?

> OK well can we get personalized parameter sets that GPT derives for us by interacting with us?  Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates.  Then we log on, it loads our personal profile matrix or something.

The transformer architecture these models use does not support persistent memory in normal operation; researchers are actively working on that. The best that can be done now is to give ChatGPT an overview of yourself and your previous conversations at the start of each new conversation. This can easily be automated behind the scenes, but the memory is strictly finite: everything must fit within the prompt's context window. The model's internal weight matrices remain static for now.
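The workaround I mean looks roughly like this. A minimal sketch in Python, assuming a hypothetical profile store and prompt format (none of these names are a real ChatGPT API):

```python
# Sketch of the "profile in the prompt" workaround: persist a short summary
# of the user and silently prepend it to every new conversation.
# PROFILE_STORE, remember, and build_prompt are hypothetical helpers.

PROFILE_STORE = {}  # user_id -> profile summary text

def remember(user_id, summary):
    """Persist a short summary of what the model has learned about the user."""
    PROFILE_STORE[user_id] = summary

def build_prompt(user_id, message, max_chars=4000):
    """Prepend the stored profile to the new message.

    The combined text must still fit in the model's finite context
    window, so it is truncated if it grows too large.
    """
    profile = PROFILE_STORE.get(user_id, "")
    prompt = f"User profile: {profile}\n\nUser: {message}"
    return prompt[:max_chars]

remember("spike", "Aerospace engineer; wants a companion bot that knows him.")
print(build_prompt("spike", "let 2x + 3 = -1. What is x?"))
```

The point is that the "personal profile matrix" here is just text riding along in the prompt, not learned weights.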

Theoretically it is actually quite easy to give a transformer language model persistent memory: just keep training the model while it is being used. Currently the models are trained once, fine-tuned, and then the parameters are left frozen. The primary issue, I think, is cost. Continuously training a separate copy of ChatGPT for each user would be VERY costly, although it could be done in principle.
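To make "train while in use" concrete, here is a toy sketch: after each interaction, take one gradient step on that user's private copy of the weights. A one-parameter linear model stands in for the LLM; a real model has billions of parameters per user copy, which is exactly where the cost comes from.

```python
# Toy continual learning: each interaction updates this user's personal
# weight w via one SGD step on squared error for the model y = w * x.

def online_update(w, x, y_true, lr=0.1):
    """One gradient step: d/dw (w*x - y)^2 = 2*(w*x - y)*x."""
    grad = 2 * (w * x - y_true) * x
    return w - lr * grad

w = 0.0  # this user's personal weight, persisted between sessions
# Simulated stream of interactions, all consistent with y = 2x:
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 50:
    w = online_update(w, x, y)
print(round(w, 3))  # converges toward 2.0
```

The mechanism is trivial; the expense of doing it at ChatGPT scale, per user, is the whole problem.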

> On Mar 7, 2023, at 12:29 AM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
>  
>  
> …> On Behalf Of Gadersd via extropy-chat
> Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works
>  
> >…How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer.
>  
>  
>  
> Hi Gadersd, I worded it slightly differently:
>  
> Me:  let 2x + 3 = -1. What is x?
>  
> GPT:
> To solve for x, we need to isolate x on one side of the equation.
> 
> Starting with 2x + 3 = -1:
> 
> Subtracting 3 from both sides:
> 
> 2x = -4
> 
> Dividing both sides by 2:
> 
> x = -2
> 
> Therefore, x is equal to -2.
>  
>  
> >…No, the small models generate output faster than the big models. The small models are not slower versions of the big models; they have completely different capabilities. You will never be able to get ChatGPT-level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn’t be slower, it just wouldn’t run at all…
>  
>  
> OK well can we get personalized parameter sets that GPT derives for us by interacting with us?  Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates.  Then we log on, it loads our personal profile matrix or something.  Somehow this thing just hasta learn about us to be a useful companion bot.
>  
> spike
>  
>  
>  
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org <mailto:extropy-chat at lists.extropy.org>
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat <http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat>
