[ExI] To achieve AGI, we need new perspectives on intelligence

John Grigg possiblepaths2050 at gmail.com
Wed Aug 18 07:18:35 UTC 2021

“A human AGI without a body is bound to be, for all practical purposes, a
disembodied ‘zombie’ of sorts, lacking genuine understanding of the world
(with its myriad forms, natural phenomena, beauty, etc.) including its
human inhabitants, their motivations, habits, customs, behavior, etc. The
agent would need to fake all these,” Raghavachary writes. Accordingly, an
embodied AGI system would need a body that matches its brain, and both need
to be designed for the specific kind of environment it will be working in.

“We, made of matter and structures, directly interact with structures,
whose phenomena we ‘experience’. Experience cannot be digitally computed—it
needs to be actively acquired via a body,” Raghavachary said. “To me, there
is simply no substitute for direct experience.”

In a nutshell, the considered response theory suggests that suitable
pairings of synthetic brains and bodies that directly engage with the world
should be considered life-like and appropriately intelligent, and,
depending on the functions enabled in the hardware, possibly conscious.

This means that you can create any kind of robot and make it intelligent by
equipping it with a brain that matches its body and sensory experience.

“Such agents do not need to be anthropomorphic—they could have unusual
designs, structures and functions that would produce intelligent behavior
alien to our own (e.g., an octopus-like design, with brain functions
distributed throughout the body),” Raghavachary said. “That said, the most
relatable human-level AI would likely be best housed in a human-like agent.”
