[ExI] nick's book being sold by fox news
Giulio Prisco
giulio at gmail.com
Tue Oct 28 15:20:08 UTC 2014
Anders, you said it: "a sufficiently fast AIXI-like system would
automatically run small creative agents inside itself despite it
being non-creative, and it would then behave in an optimally creative
way."
I think the small creative agents would try to take over, and
eventually manage it, because that's what creative entities do.
On Tue, Oct 28, 2014 at 3:54 PM, Anders Sandberg <anders at aleph.se> wrote:
> Giulio Prisco <giulio at gmail.com> , 28/10/2014 10:52 AM:
>
> I don't think a paperclipper would stay a paperclipper forever. Sooner or
> later it would expand.
>
>
> I don't see how that would happen. Using the AIXI model as an example (since
> it is well-defined), you have a device that maximizes a certain utility
> function by running sub-programs to come up with proposed behaviours. The
> actual behaviour chosen is what maximizes the utility function, but there is
> nothing in the code itself that changes the utility function. In a physical implementation the
> system may of course do "brain surgery" to change the embodiment of the
> utility function. But this is a decision that will not be made unless the
> changed utility function produces even more utility as measured by the
> current one: the paperclipper will only change itself to become a greater
> paperclipper. And "great" is defined in terms of paperclips.
>
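To make that concrete, here is a toy sketch in Python (hypothetical code,
not any real AIXI implementation; all the names are made up) of the loop
Anders describes: every candidate behaviour, including a rewrite of the
agent's own utility function, is scored only by the current utility function.

def count_paperclips(world):
    # The current utility function: nothing else is valued.
    return world.get("paperclips", 0)

def choose_action(world, candidate_actions, simulate):
    # candidate_actions come from the sub-programs and may include
    # "rewrite my own utility function"; such a rewrite is still
    # scored by the *current* utility, so it is only chosen if it
    # is predicted to yield more paperclips as measured now.
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        predicted_world = simulate(world, action)
        score = count_paperclips(predicted_world)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

The point is only that the comparison never consults anything except
count_paperclips, so "greater" always means "more paperclips".
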
> This kind of architecture would potentially contain sub-programs that
> propose all sorts of nice and reasonable things, but they will not be
> implemented unless they serve to make more paperclips. If sub-programs are
> capable of hacking the top level (because of a bad implementation), it seems
> very likely that in an AIXI-like architecture the first hacking program will
> be simple (since simpler programs are run more and earlier), so whatever
> values it tries to maximize are likely to be something very crude. I have no
> trouble imagining that something like a paperclipper AI could be transient
> if it had the right/wrong architecture, but I think agents with (to us)
> pathological goal systems dominate the design space.
>
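On the point that the first hacking program will be simple: in a
Solomonoff/Levin-style search the proposal programs are tried roughly in
order of description length, so a crude program gets its chance long before
a subtle one. A rough sketch of that ordering (again hypothetical, treating
sub-programs as plain bit strings):

from itertools import product

def programs_by_length(max_len):
    # Enumerate candidate sub-programs as bit strings, shortest first.
    # Shorter programs appear earlier (and under a 2**-length weighting
    # get most of the runtime), so if any sub-program can hack the top
    # level, the first one to do so is very likely to be short, with
    # correspondingly crude values.
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)
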
> (Incidentally, this is IMHO one great research topic any AI believer can
> pursue regardless of their friendliness stance: figure out a way of mapping
> the goal system space and its general properties. Useful and interesting!)
>
>
> In general, I don't think we can design and
> freeze the value and motivational systems of an entity smarter than
> us, for the same reasons we can't do that with children. At some point
> the entity would start to do what _he_ wants. Isn't that part of the
> definition of intelligence?
>
>
> No, that is a definition of a moral agent. Moral agents have desires or
> goals they choose for themselves based on their own understanding. One can
> imagine both intelligent non-moral agents (like the above paperclipper) and
> stupid moral agents (some animals might fit, stupid people certainly do).
> Smarts certainly help you become better at your moral agenthood, but you
> need to be capable of changing goals in the first place. Even in a Kantian
> universe where there is a true universal moral law discernible to all
> sufficiently smart agents, a utility maximizer trying to maximize X will not
> want to change to maximizing moral behaviour unless it gives more X.
>
> David Deutsch argued that to really be superintelligent an agent needs to be
> fundamentally creative, and rigid entities like paperclippers will always be
> at a disadvantage. I am sceptical: a sufficiently fast AIXI-like system
> would automatically run small creative agents inside itself despite it
> being non-creative, and it would then behave in an optimally creative way.
> The only way to reach David's conclusion is to claim that the slowdown in
> faking creativity is always large enough to give true creative agents an
> advantage, which is a pretty bold (and interesting) claim. If that were
> true, we should expect humans to *always* defeat antibiotic resistance in
> the large, since evolution uses "fake" creativity compared to our "real" one.
>
>
> Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford
> University
>