[ExI] Ben Goertzel on Large Language Models

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 22:58:51 UTC 2023


Gordon,
Given that Goertzel believes we can reach AGI in a few years, would you
simply concede that when we reach that level of intelligence the AGI is
conscious if it behaves like a conscious agent, or will you still move
the goalposts?
Giovanni

On Thu, Apr 27, 2023 at 3:55 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> Here is GPT-4's summary of Ben's views:
>
> --
> Prompt:
> According to Ben Goertzel, we have enough of a theory of AGI already that
> it SHOULD be clear that nothing with the sort of architecture that GPT-n
> systems have could really achieve human-level AGI. Why would he say that?
>
> Completion:
> Ben Goertzel, a prominent AI researcher, has expressed concerns that the
> architecture of GPT-like systems, including me, might not be sufficient to
> achieve human-level Artificial General Intelligence (AGI). His concerns
> stem from a few key reasons:
>
> Narrow focus on language processing: GPT-like systems excel at natural
> language understanding and generation tasks. However, human cognition
> involves more than just language processing: it includes learning from
> sensory experiences, reasoning about the world, and adapting to new
> situations. AGI would require a system that can perform a wide range of
> cognitive tasks across various domains, not just language-related ones.
>
> Lack of understanding and reasoning: While GPT-like models can generate
> coherent and contextually relevant responses, they don't truly understand
> the meaning behind words and concepts in the way humans do. They learn
> patterns from training data but don't have the ability to ground these
> patterns in real-world experiences. This lack of understanding could be a
> significant barrier to achieving AGI.
>
> Reactive nature: GPT-like systems are primarily designed to respond to
> inputs rather than to actively reason, plan, or learn from experiences. AGI
> would require systems that can proactively engage with the world and learn
> from their interactions.
>
> No intrinsic motivation or goal-driven behavior: GPT-like models don't
> have built-in motivations or goals like humans do. AGI systems would need
> some form of intrinsic motivation and goal-driven behavior to learn and
> adapt to different environments and tasks.
> --
> -gts
>