[ExI] Human-level AGI will never happen

Will Steinberg steinberg.will at gmail.com
Sat Jan 8 05:19:57 UTC 2022

Yeah but the first superhuman-level AGI will probably ask the second
superhuman-level AGI to prom and get rejected

On Fri, Jan 7, 2022 at 8:50 PM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> An AGI must be able to match the average human in all intellectual
> endeavors to deserve the name human-level AGI. Various aspects of human
> cognition have been solved by narrow AIs over the past 25 years, and
> year by year the number of tasks where humans still beat AI shrinks. In
> many of these narrow tasks the AI doesn't merely match human ability;
> it beats humans by completely inhuman margins. The first AI that checks
> off the last box on the list of human capabilities will be the AGI, the
> holy grail - but most of the capabilities it inherits from earlier
> iterations will be strongly superhuman. So the first AGI will actually
> be the first superhuman AGI, not a human-level AGI. To bring it down to
> human level you would have to handicap it harshly, and I can't think of
> a reasonable use case for such a digital cripple.
> Therefore, human-level AGI will never happen. QED.
> Unless it is made as some sort of sick joke.
> --
> Rafal Smigrodzki, MD-PhD
> Schuyler Biotech PLLC
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat