[ExI] Human-level AGI will never happen
atymes at gmail.com
Sun Jan 9 06:28:09 UTC 2022
On Sat, Jan 8, 2022 at 10:13 PM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sat, Jan 8, 2022 at 3:38 AM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> Unstated assumption #1: human capabilities are a finite checklist that
>> can be performed by AIs, one by one, using architectures that are able to
>> inherit the capabilities of previous AIs.
> ### Well, one way of checking off all boxes of a human to-do list
Part of the assumption is that it consists of a list of boxes that can be
>> Unstated assumption #2: previous AIs will not merge into humans in a way
>> that dramatically boosts "human-level" intellectual performance and thus
>> the requirements for an AI to be generally considered an AGI. Granted,
>> this is a form of moving the goalposts, but if the goal is to reach
>> whatever is thought of as "human-level" at the time, that goal has moved
>> over time before.
> ### Well, yeah, this is definitely moving the goalposts quite dramatically.

Granted. But haven't these particular goalposts been in motion all along?