[ExI] Ben Goertzel on Large Language Models

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 23:06:22 UTC 2023


*We are only perhaps one or two major breakthroughs in the use and
applications of the tools that build LLMs from someone tying all of those
layers together into something meaningfully more than the sum of its
parts.*

This is the worst-case scenario. I think we already have all the pieces;
they just need to be put together into a consistent whole. People are
already working on that.
I have emphasized that what excites me about LLMs is how large a jump they
represent over what we had before. It is relatively easy to extrapolate
what these systems could do, with or without additional modules, just by
adding more parameters or data. If we need to train LLM-like ANNs to mimic
how other parts of the brain work, it seems that could be done with
similar methodologies. I agree that LLMs are a good model of the
language-processing areas of the brain, and adding and connecting other
simulated brain regions should get us there.

Giovanni



On Thu, Apr 27, 2023 at 3:58 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> Gordon,
> Given that Goertzel believes we can reach AGI in a few years, would you
> simply concede that when we reach this level of intelligence the AGI is
> conscious if it behaves like a conscious agent, or will you still move
> the goalposts?
> Giovanni
>
> On Thu, Apr 27, 2023 at 3:55 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>> Here is GPT-4's summary of Ben's views:
>>
>> --
>> Prompt:
>> According to Ben Goertzel, we have enough of a theory of AGI already that
>> it SHOULD be clear that nothing with the sort of architecture that GPT-n
>> systems have could really achieve human-level AGI. Why would he say that?
>>
>> Completion:
>> Ben Goertzel, a prominent AI researcher, has expressed concerns that the
>> architecture of GPT-like systems, including me, might not be sufficient to
>> achieve human-level Artificial General Intelligence (AGI). His concerns
>> stem from a few key reasons:
>>
>> Narrow focus on language processing: GPT-like systems excel in natural
>> language understanding and generation tasks. However, human cognition
>> involves more than just language processing. It includes learning from
>> sensory experiences, reasoning about the world, and adapting to new
>> situations. AGI would require a system that can perform a wide range of
>> cognitive tasks across various domains, not just language-related ones.
>>
>> Lack of understanding and reasoning: While GPT-like models can generate
>> coherent and contextually relevant responses, they don't truly understand
>> the meaning behind words and concepts in the way humans do. They learn
>> patterns from training data but don't have the ability to ground these
>> patterns in real-world experiences. This lack of understanding could be a
>> significant barrier to achieving AGI.
>>
>> Reactive nature: GPT-like systems are primarily designed to respond to
>> inputs rather than to actively reason, plan, or learn from experiences. AGI
>> would require systems that can proactively engage with the world and learn
>> from their interactions.
>>
>> No intrinsic motivation or goal-driven behavior: GPT-like models don't
>> have built-in motivations or goals like humans do. AGI systems would need
>> some form of intrinsic motivation and goal-driven behavior to learn and
>> adapt to different environments and tasks.
>> --
>> -gts
>>
>