[extropy-chat] AI design
Adrian Tymes
wingcat at pacbell.net
Tue Jun 8 07:46:18 UTC 2004
--- Eliezer Yudkowsky <sentience at pobox.com> wrote:
> The problem is expected utility maximization. I'm using expected
> utility maximization as my formalism because it's a very simple and
> very stable system; it is the unique result of various optimality
> criteria that would make it an attractor for any self-modifying
> optimization process that tended toward any of those optimality
> criteria and wasn't already an expected utility maximizer; and
> because expected utility maximization is so taken-for-granted that
> most people who try to build an AGI will not dream of using
> anything else.
Except for all the people who are using something else: the efforts,
however off-base, to do it top-down; the ones trying, in essence, to
model a baby's consciousness and teach it the way one would teach a
child; and so forth.
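(For anyone who wants the formalism Eliezer is referring to made
concrete: an expected utility maximizer just picks the action whose
probability-weighted utility over possible outcomes is highest. A toy
sketch in Python follows; the actions, outcome probabilities, and
utility values are invented purely for illustration and aren't drawn
from anyone's actual AGI design.)

def expected_utility(action, outcome_probs, utility):
    # E[U | action] = sum over outcomes of P(outcome | action) * U(outcome)
    return sum(p * utility(o) for o, p in outcome_probs(action).items())

def choose_action(actions, outcome_probs, utility):
    # An expected utility maximizer simply takes the argmax over actions.
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Invented toy numbers, purely illustrative:
def outcome_probs(action):
    return {"gamble": {"win": 0.1, "lose": 0.9},
            "pass":   {"status quo": 1.0}}[action]

def utility(outcome):
    return {"win": 100.0, "lose": -20.0, "status quo": 0.0}[outcome]

print(choose_action(["gamble", "pass"], outcome_probs, utility))

Running it prints "pass", since the gamble's expected utility works
out to 0.1*100 - 0.9*20 = -8, versus 0 for doing nothing.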
> I haven't heard anyone try to analyze a UFAI goal system dynamic
> other than expected utility maximization
Many of these efforts haven't done formal mathematical
analyses (except in the wrong places, such as the
top-down models, which can rightly be ignored), which
is probably why you haven't heard of them.