<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 27, 2023, 6:26 PM Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">"To be clear -- we have enough of a theory of AGI already that it SHOULD [be] clear nothing with the sort of architecture that GPT-n systems have could really achieve HLAGI. </div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">We have these theories.</div><div dir="auto"><br></div><div dir="auto">The main one is the universal approximation theorem, which tells us with a large enough neural network, and enough training, *any* finite function can be approximated by a neural network. If human intelligence can be defined in terms of a mathematical function, then by the universal approximation theorem, we already *know* that a large enough neural network with enough training can achieve AGI.</div><div dir="auto"><br></div><div dir="auto">Then there is the notion that all intelligence rests in the ability to make predictions. The transformer architecture is likewise entirely based on making predictions. So again I don't see any reason that such architectures cannot with sufficient development produce something that everyone agrees is AGI (which is defined as an AI able to perform any mental activity a human can).</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"> But the abstract theory of AGI has not been fleshed out and articulated clearly enough in the HLAGI context. We need to articulate the intersection of abstract AGI theory with everyday human life and human-world practical tasks with sufficient clarity that only a tiny minority of AI experts will be confused enough to answer a question like this with YES ..." <br><br>-Ben Goertzel<br><br><a href="https://twitter.com/bengoertzel/status/1642802030933856258?s=20" target="_blank" rel="noreferrer">https://twitter.com/bengoertzel/status/1642802030933856258?s=20</a><br><br>-gts</div><div dir="ltr"></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">Invite Ben to check out the debates we are having here, perhaps he will join us in them. 
Jason

> On Thu, Apr 27, 2023 at 3:43 PM Adrian Tymes via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>
>> On Thu, Apr 27, 2023, 2:20 PM Giovanni Santostasi via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>>
>>> "LLMs ain't AGI and can't be upgraded into AGI, though they can be components of AGI systems with real cognitive architectures and reasoning/grounding ability."
>>> Gordon,
>>> What does this have to do with grounding ability? Nothing.
>>> In fact, I would agree with 90% of the sentence (apart from "can't be upgraded into AGI," since we don't know that yet).
>>
>> I would go further and say it is self-contradictory. If an LLM can be a component of an AGI system, then adding the rest of the AGI system to that LLM is a considerable upgrade - and so, as an upgrade, would upgrade that LLM to an AGI.