[ExI] Human-level AGI will never happen
atymes at gmail.com
Sat Jan 8 08:36:28 UTC 2022
Unstated assumption #1: human capabilities form a finite checklist that AIs
can tick off one by one, using architectures that are able to inherit the
capabilities of previous AIs.
Unstated assumption #2: earlier AIs will not merge with humans in a way
that dramatically boosts "human-level" intellectual performance, and thus
raises the bar an AI must clear to be generally considered an AGI. Granted,
this is a form of moving the goalposts, but if the goal is to reach
whatever counts as "human-level" at the time, that goalpost has moved
before.
On Fri, Jan 7, 2022 at 5:51 PM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> AGI must be able to match the average human in all intellectual endeavors
> to be worth being called a human-level AGI. Various aspects of human
> cognition have been solved by narrow AIs in the past 25 years and year by
> year the number of tasks where humans still beat AI is getting smaller. In
> many of these narrow tasks the AI doesn't just match human ability but
> rather it beats humans by completely inhuman margins. The first AI that
> checks off the last box on the list of human capabilities to beat will be
> the AGI, the holy grail - but most of the capabilities it inherited from
> earlier iterations will be strongly superhuman. So the first AGI will
> actually be the first superhuman AGI, not human-level AGI. To bring it down
> to human level you would have to handicap it harshly, and I can't think of
> a reasonable use-case for such a digital cripple.
> Therefore, human-level AGI will never happen. QED.
> Unless it is made as some sort of a sick joke.
> Rafal Smigrodzki, MD-PhD
> Schuyler Biotech PLLC