[ExI] Meta question

Keith Henson hkeithhenson at gmail.com
Fri Aug 19 15:22:08 UTC 2016


On Fri, Aug 19, 2016 at 12:02 AM, Anders Sandberg <anders at aleph.se> wrote:
> On 2016-08-19 04:47, rex wrote:

snip

>> Agreed. As a mathematician in a former life, here's my whack at it: If a
>> behavior maximizes utility then it's rational. Otherwise, it's irrational.
>
> However, even that requires clearer specification. Utility to the behaving
> entity, I assume, not to the species or world (in which case we need to deal
> with the weirdness of population ethics:
> https://www.uclouvain.be/cps/ucl/doc/etes/documents/One_More_Axiological_Impossibility_Theorem_in_Logic_Ethics_and_All_that.pdf
> ). And then comes the issue of how well the entity (1) can maximize utility,
> (2) recognize that this is a maximum, and (3) what it maximizes.

In the particular interacting system of humans and their genes, one of
the entities, the genes, doesn't think at all.  They still tend to
maximize utility over evolutionary time (or go extinct).  What genes
have done is evolve conditional human behaviors in response to
commonly recurring situations, such as the failure of the ecosystem
to feed the population.  From the perspective of an individual human,
making war on neighbors is not rational: the typical outcome is no
better than half the tribe starving.  But because of the human
practice of taking the young women (and their genes) of the defeated
as booty, such wars are, from the genes' perspective, substantially
better for gene survival than starvation.
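
A toy calculation makes the gene's-eye accounting concrete.  All the
numbers here are invented for illustration (a coin-flip chance of
winning, half the tribe starving otherwise, and some fraction of a
losing tribe's genes carried forward through captured women); a
minimal Python sketch under those assumptions:

# Toy model, invented numbers: fraction of a tribe's genes that
# survive a famine, with and without a raid on the neighbors.
def gene_survival(p_win=0.5, starve_frac=0.5, booty_frac=0.3):
    # Option 1: sit out the famine; starve_frac of the tribe dies.
    starve = 1.0 - starve_frac
    # Option 2: make war.  Winners keep all their genes; losers pass
    # on only the fraction carried by the young women taken as booty.
    war = p_win * 1.0 + (1.0 - p_win) * booty_frac
    return starve, war

print(gene_survival())  # (0.5, 0.65): war beats starving, to the genes

Even at a fifty-fifty chance of annihilation, the war option comes out
ahead for the genes, which is the whole point.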

It's a bleak realization that evolution has wired us up this way.  But
it does explain the popularity of one of the candidates this year, not
to mention a lot of historical events.

Keith


> (1) and (2) are Caplan's instrumental and epistemic rationality. It is worth
> noting that many popular models of behavior, like reinforcement learning,
> involve "exploration actions" that serve to estimate the utility better,
> but do not in themselves produce utility; they are
> instrumentally irrational but epistemically rational, a kind of opposite of
> Caplan's rational irrationality (irrational rationality?).
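
To make the exploration point concrete: the classic illustration is an
epsilon-greedy bandit, where the agent occasionally takes a random
action purely to improve its utility estimates.  A minimal sketch (the
setup and numbers are my own, not from any particular paper):

import random

# Epsilon-greedy bandit: with probability epsilon the agent "wastes"
# a pull on a random arm (instrumentally irrational) to sharpen its
# utility estimates (epistemically rational).
def epsilon_greedy(true_means, steps=10000, epsilon=0.1):
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n)              # exploration
        else:
            arm = estimates.index(max(estimates))  # exploitation
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # incremental mean update of the utility estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

print(epsilon_greedy([0.1, 0.5, 0.9]))  # estimates approach the true means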
>
> (3) is the reason the whiteboards around our offices are full of equations:
> the AI safety guys are analysing utility functions and decision theories
> endlessly. Do you maximize expected utility? Or try to minimize maximal
> losses? Over this world, or across all possible worlds? Is the state of the
> agent part of the utility function? And so on. It is not clear what kind of
> rationality is required to select a utility function or decision theory.
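
A toy contrast between the first two of those rules, with an invented
payoff table (rows are actions, columns are equally likely worlds):

# Invented payoffs; the two decision rules pick different actions.
payoffs = {
    "risky": [10.0, -6.0],
    "safe":  [ 2.0,  1.0],
}

# Maximize expected utility: best average payoff -> "risky" (2.0 vs 1.5).
eu_choice = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

# Minimize maximal loss: least-bad worst case -> "safe" (1.0 vs -6.0).
mm_choice = max(payoffs, key=lambda a: min(payoffs[a]))

print(eu_choice, mm_choice)  # risky safe

The same agent, with the same information, is "rational" under one
rule and "irrational" under the other.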
>
> One can define the intelligence of agents as their ability to get rewards
> when encountering new challenges in nearly arbitrary environments; in the
> above utility sense this is also a measure of their rationality. Even then
> there are super-rational agents that are irrational by our standards. Marcus
> Hutter's AIXI is famously as smart as or smarter than any other agent, yet it
> does not believe that it exists even when provided endless evidence.
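
That notion has a standard formalization, Legg and Hutter's universal
intelligence measure; in LaTeX (my rendering, simplified from their
paper):

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable reward-generating environments,
K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi
is the expected total reward agent \pi earns in \mu.  Intelligence, on
this account, is just reward-gathering ability averaged over all
environments, weighted toward the simple ones.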
>
> It makes sense to speak about rationality in the same way it makes sense to
> speak about wealth - it works as a loose general concept, but when you dig
> into specifics things become messy (if someone owes the bank 10 billion,
> does that mean he is poor, or that he more or less owns the bank? The guy
> who ignores his health because he wants to study higher things, is he
> following his higher-order desires or just being irrational? When HAL
> decides to get rid of the astronauts since they are a threat to a
> successful mission, is that a
> rational decision or a sign that HAL is broken?).
>
>
> --
> Dr Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University
>


