[ExI] Human-level AGI will never happen

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Sun Jan 9 06:11:35 UTC 2022

On Sat, Jan 8, 2022 at 3:38 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Unstated assumption #1: human capabilities are a finite checklist that can
> be performed by AIs, one by one, using architectures that are able to
> inherit the capabilities of previous AIs.

### Well, one way of checking off all the boxes on a human to-do list,
whether finite or infinite, is to have a faithful emulation of the doer
inside the AI. The AI can interrogate its model human to give human-level
answers to any challenge as needed, without giving up its various superhuman
capabilities.
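The delegation idea above can be sketched in a few lines of Python. All names here are hypothetical illustrations, not an actual system: a superhuman solver keeps an inner human emulation and routes a query to it whenever a human-level answer is specifically required.

```python
class HumanEmulation:
    """Stand-in for a faithful emulation of a human doer."""

    def answer(self, challenge: str) -> str:
        # In a real system this would be a full whole-person emulation;
        # here it just labels the response as human-level.
        return f"human-level answer to {challenge!r}"


class SuperhumanAI:
    """Hypothetical AI that retains superhuman abilities but can
    interrogate its internal model human on demand."""

    def __init__(self) -> None:
        self.inner_human = HumanEmulation()

    def solve(self, challenge: str, human_level: bool = False) -> str:
        if human_level:
            # Delegate to the emulated human instead of answering directly.
            return self.inner_human.answer(challenge)
        return f"superhuman answer to {challenge!r}"


ai = SuperhumanAI()
print(ai.solve("pass the Turing test", human_level=True))
```

The point of the sketch is only that "human-level" becomes a routing decision rather than a ceiling on the system's overall capability.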

> Unstated assumption #2: previous AIs will not merge into humans in a way
> that dramatically boosts "human-level" intellectual performance and thus
> the requirements for an AI to be generally considered an AGI.  Granted,
> this is a form of moving the goalposts, but if the goal is to reach
> whatever is thought of as "human-level" at the time, that goal has moved
> over time before.

### Well, yeah, this is definitely moving the goalposts quite dramatically.

