[ExI] ChatGPT 'Not Interesting' for creative works

Gadersd gadersd at gmail.com
Tue Mar 7 01:04:48 UTC 2023


How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer.

> So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning.  It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja?  

No, the small models generate output faster than the big models. The small models are not slower versions of the big models; they have completely different capabilities. You will never be able to get ChatGPT-level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn’t be slower, it just wouldn’t run at all.
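For reference, the equation from the prompt works out as 2x + 3 = -1, so 2x = -4 and x = -2. A minimal Python sketch of those same steps (my own illustration, not part of the original exchange):

```python
# Solve 2x + 3 = -1 step by step, mirroring the prompt "2x = ..."
a, b, c = 2, 3, -1   # coefficients of ax + b = c
rhs = c - b          # subtract 3 from both sides: 2x = -4
x = rhs / a          # divide by 2: x = -2
print(x)             # -2.0
```

Any correct answer from the model should reduce to this value, which GPT-JT's "x=1/2" does not.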

> On Mar 6, 2023, at 4:10 PM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
>  
>  
> …> On Behalf Of Gadersd via extropy-chat
> Sent: Monday, 6 March, 2023 8:25 AM
> 
>  
> Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT.
>  
> For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT
> Prompt: "solve 2x+3=-1 step by step. 2x="
> Answer: "1, so x=1/2.
> 
> A:
> 
> The answer is $1”
>  
> This model was trained in parallel as you have suggested. Not very useful, is it?
> 
> 
>  
> In your example, I am getting x = -2.
>  
> But no matter, we know how to do algebra with software, and it is good at it.  
>  
> Regarding the value of a toy ChatGPT, it depends on how you look at it.  If I ask ChatGPT to write a 2 page essay on civil rights in the 20th century, it will do so in a few seconds.  So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning.  It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja?  
>  
> The real question is how do we scale ChatGPT down six orders of magnitude and make it a commercial product?  It isn’t yet what we need if a company or organization controls it and trains it.
>  
> spike
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
