[ExI] Paul vs Eliezer

Jason Resch jasonresch at gmail.com
Tue Apr 5 04:14:22 UTC 2022


You raise many interesting questions.

I think that, when it comes to motivational questions, a general and
self-improving intelligence will devote some resources to learning,
adapting, and growing, as otherwise it will eventually be outclassed by
superintelligences that do these things.

Accordingly, such superintelligences would eventually come to realize
the common motivation that underlies all conscious life, which is the basic
utilitarian ideal: that ultimately all value/utility/meaning/purpose
derives from conscious experience. (I give further justification for this
idea here: https://alwaysasking.com/what-is-the-meaning-of-life/ )

It follows that all conscious beings, whether animal, human, alien, or
machine, share this common value and motive. We all seek to create more and
better conscious experiences, and to explore a greater variety and
diversity of states of conscious experience.

Therefore, what do we have to fear whether machine or human intelligence is
at the helm, when the ultimate goal is the same?

This motive does not preclude a superintelligence from converting Earth
into a simulated virtual heaven and transporting life forms there, nor from
replacing existing flawed life forms with more optimal ones, but it
should preclude an AI from replacing all life with paperclips.

The net result then should be a more positive, more perfect world, based on
motives and values that are already universal to all conscious life.


On Mon, Apr 4, 2022, 8:59 PM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I posted this comment on Astral Codex Ten, regarding the debate between
> Paul Christiano and Eliezer Yudkowsky:
> I feel that both Paul and Eliezer are not devoting enough attention to the
> technical issue of where AI motivation comes from. Our motivational
> system evolved over millions of years of evolution and now its core tenet
> of fitness maximization is being defeated by relatively trivial changes in
> the environment, such as the availability of porn, contraception, and social
> media. Where will the paperclip maximizer get the motivation to make
> paperclips? The argument that we do not know how to ensure a "good" goal
> system survives self-modification cuts both ways: while one way for the AI's
> goal system to go haywire may involve eating the planet, most
> self-modifications would presumably result in a pitiful mess, an AI that
> couldn't be bothered to fight its way out of a wet paper bag. Complicated
> systems, like the motivational systems of humans or AIs, have many failure
> modes, mostly of the pathetic kind (depression, mania, compulsions, or the
> forever-blinking cursor, or the blue screen) and only occasionally dramatic
> (a psychopath in control of the nuclear launch codes).
> AI alignment research might learn a lot from fizzled self-enhancing AIs,
> maybe enough to prevent the coming of the Leviathan, if we are lucky.
> It would be nice to work out a complete theory of AI
> motivation before the FOOM, but I doubt that will happen. In practice, AI
> researchers should devote a lot of attention to analyzing the details of AI
> motivation at the already existing levels, and some tinkering might help us
> muddle through.
> --
> Rafal Smigrodzki, MD-PhD
> Schuyler Biotech PLLC
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat