[ExI] chatbots and marketing: was RE: India and the periodic table
spike at rainier66.com
Fri Jun 9 01:51:29 UTC 2023
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Gadersd via extropy-chat
Sent: Thursday, 8 June, 2023 6:29 PM
To: ExI chat list <extropy-chat at lists.extropy.org>
Cc: Gadersd <gadersd at gmail.com>
Subject: Re: [ExI] chatbots and marketing: was RE: India and the periodic table
In any case… it is easy enough to see a GPT-toolkit coming, so that everyone can try training their own chatbot by choosing the training material carefully.
There is a huge difference between inference and training. You are correct that the masses can run some of the smaller models on CPUs and consumer GPUs, but that is just inference. Training these models requires much more compute. However, there have been quite a few quantization hacks that may enable training on low-end hardware. I’m not sure how significant the tradeoff will be, but the community has surprised me with the tricks it has come up with.
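For intuition, here is a toy Python sketch of what such quantization tricks buy (not any particular project's method; the matrix size and the single per-tensor scale are illustrative assumptions): storing weights in 8 bits instead of 32 cuts memory to a quarter, at the cost of a small reconstruction error.

import numpy as np

# Toy illustration of weight quantization (not any real project's scheme;
# sizes and the single per-tensor scale are assumptions chosen for clarity).
# Real methods such as GPTQ or 4-bit QLoRA are more elaborate, but the memory
# arithmetic is the same idea: int8 storage is a quarter the size of fp32.

rng = np.random.default_rng(0)
weights = rng.normal(size=(4096, 4096)).astype(np.float32)   # ~64 MB in fp32

scale = np.abs(weights).max() / 127.0                         # one scale for the whole tensor
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)  # ~16 MB in int8
reconstructed = q.astype(np.float32) * scale                  # approximate dequantization

print("bytes fp32:", weights.nbytes, " bytes int8:", q.nbytes)
print("max abs error:", float(np.abs(weights - reconstructed).max()))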
The wisdom used to be that one had to have a $12,000 GPU to do anything interesting, but ever since the llama.cpp guy got a model running on a MacBook, I think any declaration of hard limitations should be thrown out.
We may very well see an explosion of user-trained models that blow up the internet…
Hi Gadersd,
Most of us have background processor power that goes unused. For many years I ran a math program called Prime95, in which we were collectively searching for the next Mersenne prime. There was also SETI@home, which analyzed signals from deep space looking for ET. Through all of that, for about the past 30 years, I have theorized that eventually the killer app would show up: something that requires mind-boggling numbers of CPU cycles but is well suited to running in the background on many parallel processors. Custom chatbots are (I think) an example of that magic process I have been anticipating for nearly three decades.
Since I think you know more about the computing requirements for training than I do, I ask you: given a pool of a million volunteers with ordinary consumer-level CPUs, all willing to let the process run in the background, could not anyone with a pile of training material use that background computing resource to create a custom GPT?
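A minimal sketch of the idea, in Python, assuming a coordinator that hands each volunteer a shard of the data, collects their gradients, and averages them (the names and numbers are hypothetical; a real system would also have to handle stragglers, bandwidth, and untrusted contributions):

import numpy as np

# Toy simulation of volunteer training: each "volunteer" computes a gradient
# on its own shard of the data, and a coordinator averages the gradients and
# updates the shared model.  A linear model stands in for the real network.

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(30_000, 3))
y = X @ true_w + 0.01 * rng.normal(size=30_000)

n_volunteers = 1_000
shards = np.array_split(np.arange(len(X)), n_volunteers)

w = np.zeros(3)                            # shared model parameters
for step in range(100):
    grads = []
    for shard in shards:                   # in reality these run in parallel on volunteers' machines
        Xs, ys = X[shard], y[shard]
        grads.append(2 * Xs.T @ (Xs @ w - ys) / len(shard))
    w -= 0.1 * np.mean(grads, axis=0)      # coordinator averages and applies the update

print("recovered weights:", w)             # should land close to true_w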
Next question for Gad or anyone else: is it not clear there is a huge market available? I gave one example: a chatbot trained on the books in the fish bowl, all of it in the 200 class of the old Dewey decimal system. The result isn’t any good for any question outside what is found in the 200 class, but a plentiful market exists, and will persist, for the questions the 200 class answers.
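A hedged sketch of that kind of narrow-domain training, assuming the 200-class books have already been digitized into plain-text files (the model name, paths, and hyperparameters are placeholders, and this uses the Hugging Face transformers and datasets libraries rather than any tool mentioned in this thread):

# Sketch of fine-tuning a small causal language model on a hand-picked corpus.
# Assumes the transformers and datasets libraries are installed;
# "dewey_200_books/*.txt" is a hypothetical directory of digitized texts.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

corpus = load_dataset("text", data_files={"train": "dewey_200_books/*.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-dewey200",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

Inference on the resulting model could then run on the kind of consumer hardware Gadersd describes; the training step above is the part that might be farmed out to volunteer background cycles.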
Gad or anyone, could we use volunteer background computing resources the way Prime95 has been doing all these years?
spike