…> On Behalf Of Darin Sunley via extropy-chat
Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works

>… ChatGPT3 has ~175 billion parameters. Training it requires datacenters of computing power. But the model itself will fit into a relatively small number of desktop PCs, even without compression. I'm pretty sure the model itself can be compressed to where paths through it will fit in the memory of a beefy desktop…

Cool, that matches my intuition as someone who watched in real time as Deep Blue, the chess program that ran on a supercomputer, was taken out of service almost immediately after it defeated the carbon unit Kasparov. We couldn't figure out why until my computer-jockey friend told me IBM didn't want its big iron to be defeated by a desktop computer. I wasn't sure I believed it until I went through Deep Blue's games against Garry, then compared them with the stuff the desktops were playing less than five years later. I realized it was the same level of play.

But even before those five years were up, whatever magic Deep Blue was calculating could have been done by a few desktops running in parallel and given more time.

Darin's theory gives me an idea: we could get an ExI team together and let our computers collectively train a micro-ChatGPT using the pooled computing resources of a dozen of us. Then we could take on a similar uGPT trained by Mensa or the Prime95 group in a game of Jeopardy or something.

spike
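[As a rough sanity check on Darin's parameter-count claim, here is a minimal back-of-the-envelope sketch. The 175 billion parameter figure is from the post; the byte-per-parameter and desktop RAM figures are assumptions of mine (16-bit weights for the uncompressed case, 4-bit quantization for the compressed case, 128 GB of RAM in a "beefy desktop"), not anything stated in the thread.]

```python
# Back-of-the-envelope memory estimate for a ~175B-parameter model.
# Assumed figures: fp16 (2 bytes/param) uncompressed, int4 (0.5 bytes/param)
# compressed, and a 128 GB "beefy desktop". Only the parameter count
# comes from the original post.

PARAMS = 175e9               # ~175 billion parameters (from the post)
BYTES_PER_PARAM_FP16 = 2.0   # 16-bit weights, no compression (assumed)
BYTES_PER_PARAM_INT4 = 0.5   # aggressive 4-bit quantization (assumed)
DESKTOP_RAM_BYTES = 128 * 1024**3  # 128 GiB per desktop (assumed)

def gib(n_bytes: float) -> float:
    """Convert a byte count to GiB."""
    return n_bytes / 1024**3

full = PARAMS * BYTES_PER_PARAM_FP16
quant = PARAMS * BYTES_PER_PARAM_INT4

print(f"fp16 weights:  {gib(full):6.0f} GiB  -> ~{full / DESKTOP_RAM_BYTES:.1f} desktops")
print(f"4-bit weights: {gib(quant):6.0f} GiB  -> ~{quant / DESKTOP_RAM_BYTES:.1f} desktops")

# Output (approximate):
#   fp16 weights:     326 GiB  -> ~2.5 desktops
#   4-bit weights:     81 GiB  -> ~0.6 desktops
```

Under those assumptions the uncompressed weights would spread across roughly three well-equipped desktops, and a heavily quantized copy would indeed fit in the memory of a single beefy machine, which is consistent with the claim quoted above.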