[ExI] ChatGPT 'Not Interesting' for creative works

spike at rainier66.com
Mon Mar 6 21:10:14 UTC 2023

…> On Behalf Of Gadersd via extropy-chat
Sent: Monday, 6 March, 2023 8:25 AM

Toy models can be and have been trained in parallel across consumer computers, but I think you would be disappointed by their intelligence compared with ChatGPT's.

For example, I tried GPT-JT, a 6-billion-parameter model accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT.

Prompt: "solve 2x+3=-1 step by step. 2x="

Answer: "1, so x=1/2.

A:

The answer is $1"

This model was trained in parallel as you have suggested. Not very useful, is it?

…

In your example, I am getting x = -2.

But no matter, we know how to do algebra with software, and it is good at it.  
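
For what it's worth, a quick check with SymPy (just a sketch, assuming an ordinary Python setup with the sympy package installed) agrees with the -2:

    from sympy import Eq, solve, symbols

    x = symbols('x')
    # The equation from Gadersd's prompt: 2x + 3 = -1
    print(solve(Eq(2*x + 3, -1), x))   # prints [-2]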

Regarding the value of a toy ChatGPT, it depends on how you look at it.  If I ask ChatGPT to write a 2-page essay on civil rights in the 20th century, it will do so in a few seconds.  So imagine I had a microChatGPT and asked it to write the same 2-page essay by tomorrow morning.  It would be analogous to Deep Blue doing its 3 minutes of calculation in 18 hours, ja?
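
Rough arithmetic on that analogy, with ballpark guesses for the unstated numbers (call the essay about 10 seconds today, and "by tomorrow morning" about 12 hours):

    # Slowdown in the Deep Blue analogy: 3 minutes per move stretched to 18 hours
    deep_blue_slowdown = (18 * 60 * 60) / (3 * 60)
    print(deep_blue_slowdown)      # 360x slower, same move comes out

    # Slowdown for a hypothetical microChatGPT: ~10 seconds stretched to ~12 hours
    essay_slowdown = (12 * 60 * 60) / 10
    print(essay_slowdown)          # 4320x slower, same essay comes out

Either way it is the same kind of tradeoff: a few orders of magnitude slower, but the same product at the end.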

The real question is: how do we scale ChatGPT down six orders of magnitude and make it a commercial product?  It isn't yet what we need as long as a single company or organization controls it and trains it.

spike
