[ExI] Ben Goertzel on Large Language Models

Jason Resch jasonresch at gmail.com
Fri Apr 28 20:20:02 UTC 2023


On Thu, Apr 27, 2023, 6:26 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> "To be clear -- we have enough of a theory of AGI already that it SHOULD
> [be] clear nothing with the sort of architecture that GPT-n systems have
> could really achieve HLAGI.
>

We do have relevant theories.

The main one is the universal approximation theorem, which tells us that a
large enough neural network, given enough training, can approximate *any*
continuous function (on a bounded domain) to arbitrary accuracy. If human
intelligence can be expressed as such a mathematical function, then by the
universal approximation theorem we already *know* that a sufficiently
large, sufficiently trained neural network can achieve AGI.
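
As a toy illustration of that capacity, here is a minimal sketch in plain
Python/numpy (my own example written for this thread, not anyone's
production code): a one-hidden-layer network trained by gradient descent
to approximate sin(x). The theorem is about what such networks can do
when scaled up enormously.

# Toy universal-approximation demo: fit sin(x) with one hidden layer.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H = 50                              # hidden units; more units, better fit
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(20000):
    h = np.tanh(X @ W1 + b1)        # hidden layer activations
    pred = h @ W2 + b2              # network output
    err = pred - y                  # prediction error
    # gradient descent on the mean squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("max |error| after training:", float(np.abs(pred - y).max()))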

Then there is the notion that intelligence ultimately rests on the ability
to make predictions. The transformer architecture is likewise built
entirely around making predictions. So again, I see no reason why such
architectures could not, with sufficient development, produce something
that everyone agrees is AGI (defined as an AI able to perform any mental
task a human can).
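
To make "based on making predictions" concrete, here is an equally toy
sketch (again my own illustration, not how GPT is actually implemented):
estimate P(next word | previous word) from a tiny corpus and use it to
predict. A transformer learns a vastly richer version of this same
conditional distribution, conditioned on the whole preceding context.

# Toy next-word predictor: the "model" is just bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # how often nxt follows prev

def predict_next(word):
    """Most probable next word and its estimated probability.
    Assumes the word appeared (non-finally) in the corpus."""
    following = counts[word]
    total = sum(following.values())
    best, n = following.most_common(1)[0]
    return best, n / total

print(predict_next("the"))          # prints ('cat', 0.5)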


But the abstract theory of AGI has not been fleshed out and articulated
> clearly enough in the HLAGI context. We need to articulate the intersection
> of abstract AGI theory with everyday human life and human-world practical
> tasks with sufficient clarity that only a tiny minority of AI experts will
> be confused enough to answer a question like this with YES ..."
>
> -Ben Goertzel
>
> https://twitter.com/bengoertzel/status/1642802030933856258?s=20
>
> -gts
>


Invite Ben to check out the debates we are having here; perhaps he will
join us in them. :-)

Jason


>
>
>
> On Thu, Apr 27, 2023 at 3:43 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Apr 27, 2023, 2:20 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> *"LLMs ain't AGI and can't be upgraded into AGI, though they can be
>>> components of AGI systems with real cognitive architectures and
>>> reasoning/grounding ability."*Gordon,
>>> What this has to do with the grounding ability? Nothing.
>>> In fact, I would agree with 90 % of the sentence (besides can't be
>>> upgraded into AGI because we don't know yet).
>>>
>>
>> I would go further and say it is self-contradictory. If it can be a
>> component of an AGI system, then adding the rest of the AGI system to an
>> LLM is a considerable upgrade - and so, as an upgrade, would upgrade that
>> LLM to an AGI.
>>