[Paleopsych] Rebecca Saxe: Do the Right Thing

Premise Checker checker at panix.com
Mon Sep 19 19:43:43 UTC 2005


Rebecca Saxe: Do the Right Thing
http://bostonreview.net/BR30.5/saxe.html

First, the summary from the "Magazine and Journal Reader" feature of the daily 
bulletin from the Chronicle of Higher Education, September 13, 2005
http://chronicle.com/daily/2005/09/2005091301j.htm

    A glance at the September/October issue of the Boston Review:
    Searching for morality

    Is killing one person for the sake of saving five others justifiable?
    Moral dilemmas like that one are a part of life, says Rebecca Saxe, a
    junior fellow in cognitive neuroscience at Harvard University. But,
    she asks, what causes people to judge whether something is right or
    wrong? By examining several studies, she has explored whether a
    universal concept of morality exists.

    One school of thought, she writes, suggests that human beings possess
    a "moral instinct." In Internet surveys, 89 percent of people said
    they would answer yes to the above dilemma, depending on the instance.
    It's "an impressive consensus," she writes, particularly because
    respondents were indistinguishable by race, sex, wealth, religious
    affiliation, nationality, or educational background. Still, she notes,
    Web users are likely to have significant exposure to Western culture,
    making it difficult to ascertain whether the findings represent a
    universal concept of morality or just a Western one.

    Developmental psychologists, meanwhile, have studied preverbal infants
    to see if human beings are born with a sense of morality that is
    untouched by cultural influences. In one exercise, researchers showed
    15-month-old toddlers movies of contrasting behavior. In one film, a
    "nice" man pushes a bag off a seat so that a girl can sit down. In
    another, a "mean" man pushes the girl off the seat to make room for
    the bag. The study showed that, afterward, babies were more likely to
    crawl to the "nice" man. The results are "interesting," Ms. Saxe says,
    because they show that, "by the time they are 1 year old, babies can
    distinguish between helpful actions and hurtful ones."

    The most important research is likely to come from studying brain
    images, she says. She mentions a study in which magnetic-resonance
    imaging was used to compare blood-oxygen levels in the brain when
    people considered sentences describing moral violations -- like "they
    hung an innocent" -- sentences describing unpleasant but not immoral
    actions, and morally neutral statements -- like "stones are made of
    water." That study found higher oxygenation in one region of the
    brain while subjects read about moral violations than while they read
    either of the other two kinds of sentences. The author of the study
    speculated that that region of the brain might play a role in moral
    reasoning.

    Such findings are exciting, Ms. Saxe says, but she urges caution in
    their interpretation. It remains controversial whether such a
    specialized brain region exists, she notes, and even if it did, its
    implications might not be universal.

    The article, "Do the Right Thing: Cognitive Science's Search for a
    Common Morality," is available at
    http://bostonreview.net/BR30.5/saxe.html
    --Jason M. Breslow

      _________________________________________________________________

    Cognitive science's search for a common morality

    Consider the following dilemma: Mike is supposed to be the best man
    at a friend's wedding in Maine this afternoon. He is carrying the
    wedding rings with him in New Hampshire, where he has been staying on
    business. One bus a day goes directly to the coast. Mike is on his way
    to the bus station with 15 minutes to spare when he realizes that his
    wallet has been stolen, and with it his bus tickets, his credit cards,
    and all his forms of ID.

    At the bus station Mike tries to persuade the officials, and then a
    couple of fellow travelers, to lend him the money to buy a new ticket,
    but no one will do it. He's a stranger, and it's a significant sum.
    With five minutes to go before the bus's departure, he is sitting on a
    bench trying desperately to think of a plan. Just then, a well-dressed
    man gets up for a walk, leaving his jacket, with a bus ticket to Maine
    in the pocket, lying unattended on the bench. In a flash, Mike
    realizes that the only way he will make it to the wedding on time is
    if he takes that ticket. The man is clearly well off and could easily
    buy himself another one.

    Should Mike take the ticket?

    My own judgment comes down narrowly, but firmly, against stealing the
    ticket. And in studies of moral reasoning, the majority of American
    adults and children answer as I do: Mike should not take the ticket,
    even if it means missing the wedding. But this proportion varies
    dramatically across cultures. In Mysore, a city in the south of India,
    85 percent of adults and 98 percent of children say Mike should steal
    the ticket and go to the wedding. Americans, and I, justify our choice
    in terms of justice and fairness: it is not right for me to harm this
    stranger--even in a minor way. We could not live in a world in which
    everyone stole whatever he or she needed. The Indian subjects focus
    instead on the importance of personal relationships and contractual
    obligations, and on the relatively small harm that will be done to the
    stranger in contrast to the much broader harm that will be done to the
    wedding.

    An elder in a Maisin village in Papua New Guinea sees the situation
    from a third perspective, focused on collective responsibility. He
    rejects the dilemma: "If nobody [in the community] helped him and so
    he [stole], I would say we had caused that problem."

    Examples of cross-cultural moral diversity such as this one may not
    seem surprising in the 21st century. In a world of religious wars,
    genocide, and terrorism, no one is naive enough to think that all
    moral beliefs are universal. But beneath such diversity, can we
    discern a common core--a distinct, universal, maybe even innate "moral
    sense" in our human nature?

    In the early 1990s, when James Q. Wilson first published The Moral
    Sense, his critics and admirers alike agreed that the idea was an
    unfashionable one in moral psychology. Wilson, a professor of
    government and not psychology, was motivated by the problem of
    non-crime: how and why most of us, most of the time, restrain our
    basic appetites for food, status, and sex within legal limits, and
    expect others to do the same. The answer, Wilson proposed, lies in our
    universal "moralsense, one that emerges as naturally as [a] sense of
    beauty or ritual (with which morality has much in common) and that
    will affect [our] behavior, though not always, and in some cases not
    obviously."

    But the fashion in moral psychology is changing.

    A decade after Wilson's book was published, the psychological and
    neural basis of moral reasoning is a rapidly expanding topic of
    investigation within cognitive science. In the intervening years, new
    technologies have been invented, and new techniques developed, to
    probe ever deeper into the structure of human thought. We can now
    acquire vast numbers of subjects over the Internet, study previously
    inaccessible populations such as preverbal infants, and, using brain
    imaging, observe and measure brain activity non-invasively in large
    numbers of perfectly healthy adults. Inevitably, enthusiasts make
    sweeping claims about these new technologies and the old mysteries
    they will leave in their wake. ("The brain does not lie" is a common
    but odd marketing claim, since in an obvious sense, brains are the
    only things that ever do.)

    The appeal of the new methods is clear: if an aspect of reasoning is
    genuinely universal, part of the human genetic endowment, then such
    reasoning might be manifest in massive cross-cultural samples, in
    subjects not yet exposed to any culture, such as very young infants,
    and perhaps even in the biological structure of our reasoning organ,
    the brain.

    How far have these technologies come in teaching us new truths about
    our moral selves? How far could they go? And what will be the
    implications of a new biopsychological science of natural morality?
    "The truth, if it exists, is in the details," wrote Wilson, and
    therefore I will concentrate on the details of three sets of very
    recent experiments, each of which approaches the problem using a
    different method: an Internet survey, a cognitive study of infants,
    and a study of brain imaging. Each is at the cutting edge of moral
    psychology, each is promising but flawed, and each should be greeted
    with a mix of enthusiasm and interpretative caution.

    * * *

    Mike, the man we left sitting at the bus station, is in a particularly
    bad moral predicament: he must choose between two actions (stealing
    and breaking an obligation), both of which are wrong. Moral
    psychologists call cases like these "moral dilemmas." Over the last
    half century, batteries of moral dilemmas have been presented to men
    and women, adults and children, all over the world. The questions at
    the heart of these studies are these: How do people arrive at the
    moral judgment that an action, real or contemplated, is right or
    wrong? What are the rules governing these moral calculations, and from
    where do they come? Which, if any, of the fundamental components are
    universal?

    All of them, answered the eminent psychologist Lawrence Kohlberg. In
    the 1970s and 1980s, Kohlberg argued that moral reasoning is based on
    explicit rules and concepts, like conscious logical problem-solving;
    over the course of an individual's development, the rules and concepts
    that he or she uses to solve moral problems unfold in a well-defined,
    universal sequence of stages. These stages are biologically determined
    but socially supported. In early stages, moral reasoning is strongly
    influenced by external authority; in later stages, moral reasoning
    appeals first to internalized convention, and then to general
    principles of neutrality, egalitarianism, and universal rights. It may
    be that what makes one culture, one sex, or one individual different
    from another is just how high and how fast it manages to climb the
    moral ladder.

    To test this hypothesis, moral dilemmas were presented to people of
    varying ages and classes, both sexes, and many cultures (including
    people in India, Thailand, Iran, Turkey, Kenya, Nigeria, and
    Guatemala; communities of Alaskan Inuit; Tibetan Buddhist monks; and
    residents of an Israeli kibbutz). Kohlberg's key methodological
    insight was to focus not on the answers that people give to moral
    dilemmas but on how they justify their choice. A seven-year-old and a
    white-haired philosopher may agree that Mike should not steal the
    ticket, but they will differ in their explanations of why not. The
    seven-year-old may say that Mike shouldn't steal because he will get
    caught and punished, while the philosopher may appeal to an
    interpretation of Kant's categorical imperative: act only on a
    principle that you would wish everyone to follow in a similar
    situation.

    Kohlberg's claims were deeply controversial, not least because the
    highest stage of moral development was accorded almost exclusively to
    Western adults, and among those, mostly to men. Critics attacked
    everything from the specific dilemmas to the coding criteria to the
    whole philosophy of monotonic universal moral development. The
    psychologist Carol Gilligan, for example, argued that women justify
    their moral choices differently from men, but with equal
    sophistication. Men, she claimed, tend to reason about morality in
    terms of justice, and women in terms of care: "While an ethic of
    justice proceeds from the premise of equality--that everyone should be
    treated the same--an ethic of care rests on the premise of
    non-violence--that no one should be hurt." Similar arguments were made
    for non-Western cultures--that they emphasize social roles and
    obligations rather than individual rights and justice. On the whole,
    this emphasis on group differences won the day. Kohlberg's vision was
    rejected, and the psychological study of moral universals reached an
    impasse.

    Very recently, though, the use of moral dilemmas to study moral
    universals has reemerged. Marc Hauser of Harvard University and John
    Mikhail of Georgetown University are among the cognitive scientists
    leading the charge. The current theorists take as their model for
    moral reasoning not conscious problem-solving, as Kohlberg did, but
    the human language faculty. That is, rather than "moral reasoning,"
    human beings are understood to be endowed with a "moral instinct" that
    enables them to categorize and judge actions as right or wrong the way
    native speakers intuitively recognize sentences as grammatical or
    ungrammatical.

    We can draw three predictions from the theory that morality operates
    as language does. First, just as each speaker can produce and
    understand an infinite number of completely original sentences, every
    moral reasoner can make fluent, confident, and compelling moral
    judgments about an infinite number of unique cases, including ones
    that they have never imagined confronting. Second, cross-culturally,
    systems of moral reasoning can be as diverse as human languages are,
    without precluding that a universal system of rules, derived from our
    biological inheritance, underlies and governs all these surface-level
    differences. Finally, just as native speakers are often unable to
    articulate the rules of grammar that they obey when speaking, the
    practitioners of moral judgment may have great difficulty articulating
    the principles that inform their judgments. Hauser, Mikhail, and their
    colleagues have tested these predictions with a set of moral dilemmas
    originally introduced by the philosopher Philippa Foot in 1967 and now
    known collectively as the Trolley Problems. To illustrate the
    category, let's begin with Anna, standing on the embankment above a
    train track, watching a track-maintenance team do its work. Suddenly,
    Anna hears the sound of a train barrelling down the tracks: the brakes
    have failed, and the train is heading straight for the six workers.
    Beside Anna is a lever; if she pulls it, the train will be forced onto
    a side track and will glide to a halt without killing anyone. Should
    she pull the lever?

    No moral dilemma yet. But now let's complicate the story. In the
    second scenario, Bob finds himself in the same situation, except that
    one of the six maintenance people is working on the side track. Now
    the decision Bob faces is whether to pull the lever to save five
    lives, knowing that if he does, a man who would otherwise have lived
    will be killed.

    In a third version of what is clearly a potentially infinite series,
    the sixth worker is standing beside Camilla on the embankment. The
    only way to stop the train, and save the lives of the five people on
    the track, is for Camilla to push the man beside her down onto the
    track. By pushing him in front of the train and so killing him, she
    would slow it down enough to save the others.

    Finally, for anyone not yet convinced that there are cases in which it
    is wrong to sacrifice one person in order to save five, consider Dr.
    Dina, a surgeon who has five patients each dying from the failure of a
    different organ. Should she kill one healthy hospital visitor and
    distribute the organs to her patients in order to save five lives?

    By putting scenarios like these on a Web site
    (http://moral.wjh.harvard.edu) and soliciting widely for participants,
    Hauser and his lab have collected judgments about Trolley Problems
    from thousands of people in more than a hundred countries,
    representing a broad range of ages and religious and educational
    backgrounds. The results reveal an impressive consensus. For example,
    89 percent of subjects agree that it is permissible for Bob to pull
    the lever to save five lives at the cost of one but that it is not
    permissible for Camilla to make the same tradeoff by pushing the man
    onto the track.

    More importantly, even in this enormous sample and even for
    complicated borderline cases, participants' responses could not be
    predicted by their age, sex, religion, or educational background.
    Women's choices in the scenarios overall were indistinguishable from
    men's, Jews' from Muslims' or Catholics', teenagers' from their
    parents' or grandparents'. Consistent with the analogy to language,
    these thousands of people make reliable and confident moral judgments
    for a whole series of (presumably) novel scenarios. Interestingly,
    Hauser, Mikhail, and their colleagues also found that while
    the "moral instinct" was apparently universal, people's subsequent
    justifications were not; instead, they were highly variable and often
    confused. Fewer than one in three participants could come up with a
    justification for the moral difference between Camilla's choice and
    Bob's, even though almost everyone shares the intuition that the two
    cases are different.
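
    To make the statistical claim concrete, here is a minimal sketch, in
    Python, of the simplest form such a test could take: a chi-square
    test of independence asking whether a demographic variable (here,
    sex) predicts judgments on Bob's dilemma. The counts below are
    hypothetical, invented purely for illustration; they are not Hauser
    and Mikhail's data.

        # Sketch: does sex predict trolley judgments? Hypothetical counts.
        import math

        def chi2_independence(table):
            """Pearson chi-square test for a 2x2 contingency table."""
            row_totals = [sum(row) for row in table]
            col_totals = [sum(col) for col in zip(*table)]
            n = sum(row_totals)
            chi2 = 0.0
            for i, row in enumerate(table):
                for j, observed in enumerate(row):
                    expected = row_totals[i] * col_totals[j] / n
                    chi2 += (observed - expected) ** 2 / expected
            # survival function of chi-square with 1 degree of freedom
            p_value = math.erfc(math.sqrt(chi2 / 2))
            return chi2, p_value

        # Rows: men, women. Columns: "permissible", "not permissible".
        judgments = [[445, 55],
                     [448, 52]]
        chi2, p = chi2_independence(judgments)
        print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
        # A large p-value means sex tells us nothing about the judgment,
        # which is the pattern the survey reports.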

    So what can we learn from this study? Has the Internet--this new
    technology--given us a way to reveal the human universals in moral
    judgments?

    We must be cautious: Web-based experiments have some obvious
    weaknesses. While the participants may come from many countries and
    many backgrounds, they all have Internet access and computer skills,
    and therefore probably have significant exposure to Western culture.
    (In fact, although the first study included just over 6,000 people
    from more than a hundred countries, more than two thirds of them were
    from the United States.) Because the survey is voluntary, it includes
    a disproportionate number of people with a preexisting interest in
    moral reasoning. (More than two thirds had previously studied moral
    cognition or moral philosophy in some academic context, making it all
    the more surprising that they could not give clear verbal
    justifications of their intuitions.) And because subjects fill out the
    survey without supervision or compensation, sincerity and good faith
    cannot be ensured (although Hauser, Mikhail, and their colleagues did
    exclude the subjects who claimed to live in Antarctica or to have
    received a Ph.D. at 15).

    Also, this is only one study, focused on only one kind of moral
    dilemma: the Trolley Problems. So far, we don't know whether the
    universality of intuitions observed in this study would generalize to
    other kinds of dilemmas. The results of the experiment with Mike and
    the bus ticket suggest it probably would not.

    On the other hand, the survey participants did include a fairly even
    balance of sexes and ages. And the fact that sex in particular makes
    no difference to people's choices in the Trolley Problems, even in a
    sample of thousands (and growing), could be important. Remember, Carol
    Gilligan charged that Lawrence Kohlberg's theory of multi-stage moral
    development was biased toward men; she claimed that men and women
    reason about moral dilemmas with equal sophistication, but according
    to different principles. Hauser and Mikhail's Internet study lets us
    look at the controversy from a new angle. Gilligan's analysis was
    based on justifications: how men and women consciously reflect upon,
    explain, and justify the moral choices that they make. It is easy to
    imagine that the way we justify our choices depends a lot on the
    surrounding culture, on external influences and expectations. What
    Hauser and Mikhail's results suggest is that though the reflective,
    verbal aspects of moral reasoning (which they found inarticulate and
    confused in any case) may differ by sex, the moral
    intuition that tells us which choice is right and which wrong for Anna
    or Bob or Camilla is part of human nature, for women just as for men.

    Still, the Internet's critical weakness is intractable. As long as
    people must have Internet access in order to participate, the sample
    will remain culturally biased, and it will be hard to know for sure
    from where the moral consensus comes: from human nature or from
    exposure to Western values. The only way to solve this problem is to
    investigate moral reasoning in people with little or no exposure to
    Western values. And cognitive scientists are beginning to do just
    that.

    * * *

    One group of experimental participants that is relatively free of
    cultural taint is preverbal infants. Before they are a year old, while
    their vocabulary consists of only a few simple concrete nouns, infants
    have presumably not yet been acculturated into the specific moral
    theories of their adult caretakers. Infant studies therefore offer
    scientists the chance to measure innate moral principles in mint
    condition. With this opportunity, of course, comes a methodological
    challenge. How can we measure complex, abstract moral judgments made
    by infants who are just beginning to talk, point, and crawl?

    To meet this challenge, developmental psychologists who study all
    areas of cognition have become adept--often ingenious--at teasing
    meaning out of one of the few behaviors that infants can do well:
    looking. Infants look longer at the things that interest them: objects
    or events that are attractive, unexpected, or new. Looking-time
    experiments therefore gauge which of two choices--two objects, people,
    or movies--infants prefer to watch.
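
    To make the method concrete, here is a minimal sketch, in Python, of
    how a looking-time preference might be scored: each infant
    contributes a looking time for each of two displays, and a two-sided
    sign test asks whether infants systematically look longer at one of
    them. All numbers are hypothetical, invented for illustration.

        # Looking-time sketch: do infants look longer at display A than
        # at display B? Paired times in seconds, hypothetical data.
        import math

        def sign_test(pairs):
            """Two-sided sign test on paired looking times (a, b)."""
            diffs = [a - b for a, b in pairs if a != b]  # drop ties
            n = len(diffs)
            k = sum(d > 0 for d in diffs)  # infants looking longer at A
            tail = min(k, n - k)
            # doubled binomial tail under the fair-coin null
            p = 2 * sum(math.comb(n, i) for i in range(tail + 1)) / 2 ** n
            return k, n, min(p, 1.0)

        looking_times = [(9.1, 5.2), (7.8, 6.0), (10.4, 4.9), (6.5, 7.1),
                         (8.9, 5.5), (9.7, 6.2), (7.2, 5.8), (8.0, 4.4)]
        k, n, p = sign_test(looking_times)
        print(f"{k}/{n} infants looked longer at display A, p = {p:.3f}")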

    From just this simple tool, a surprisingly rich picture of infant
    cognition has emerged. We have learned, for example, that infants only
    a few days old prefer looking at a human face to other objects;
    that by the time they are four months old, infants know that one
    object cannot pass through the space occupied by another object; and
    that by seven months, they know that a billiard ball will move if and
    only if it is hit by something else.

    Only recently, though, has this tool begun to be applied to the field
    of moral cognition. The questions these new studies seek to answer
    include the following: Where do we human beings get the notions of
    "right," "wrong," "permissible," "obligatory," and "forbidden"? What
    does it mean when we judge actions--our own or others'--in these
    terms? How and why do we judge some actions wrong (or forbidden) and
    not just silly, unfortunate, or unconventional?

    Not all transgressions are created equal; some undesirable or
    inappropriate actions merely violate conventions, while others are
    genuinely morally wrong. Rainy weather can be undesirable, some
    amateur acting is very bad, and raising your hand before speaking at a
    romantic candlelit dinner is usually inappropriate, but none of these
    is morally wrong or forbidden. Even a tsunami or childhood cancer,
    though awful, is not immoral unless we consider it the action of an
    intentional agent.

    The psychologist Elliott Turiel has proposed that the moral rules a
    person espouses have a special psychological status that distinguishes
    them from other rules--like local conventions--that guide behavior.
    One of the clearest indicators of this so-called moral-conventional
    distinction is the role of local authority.

    We understand that the rules of etiquette--whether it is permissible
    to leave food on your plate, to belch at the table, or to speak
    without first raising your hand--are subject to context, convention,
    and authority. If a friend told you before your first dinner at her
    parents' house that in her family, belching at the table after dinner
    is a gesture of appreciation and gratitude, you would not think your
    friend's father was immoral or wrong or even rude when he leaned back
    after dinner and belched--whether or not you could bring yourself to
    join in.

    Moral judgments, in contrast, are conceived (by hypothesis) as not
    subject to the control of local authority. If your friend told you
    that in her family a man beating his wife after dinner is a gesture of
    appreciation and gratitude, your assessment of that act would
    presumably not be swayed. Even three-year-old children already
    distinguish between moral and conventional transgressions. They allow
    that if the teacher said so, it might be okay to talk during nap, or
    to stand up during snack time, or to wear pajamas to school. But they
    also assert that a teacher couldn't make it okay to pull another
    child's hair or to steal her backpack. Similarly, children growing up
    in deeply religious Mennonite communities distinguish between rules
    that apply because they are written in the Bible (e.g., that Sunday is
    the day of Sabbath, or that a man must uncover his head to pray) and
    rules that would still apply even if they weren't actually written in
    the Bible (including rules against personal and material harm).

    There is one exception, though. James Blair, of the National
    Institutes of Health, has found that children classified as
    psychopaths (partly because they exhibit persistent aggressive
    behavior toward others) do not make the normal moral-conventional
    distinction. These children know which behaviors are not allowed at
    school, and they can even rate the relative seriousness of different
    offences; but they fail when asked which offences would still be wrong
    to commit even if the teacher suspended the rules. For children with
    psychopathic tendencies (and for psychopathic adults, too, though not
    for those Blair calls "normal murderers"), rules are all a matter of
    local authority. In its absence, anything is permissible.

    Turiel's thesis, then, is that healthy individuals in all cultures
    respect the distinction between conventional violations, which depend
    on local authorities, and moral violations, which do not.

    This thesis remains intensely controversial. The chief voice of
    opposition may come not from psychologists but from anthropologists,
    who argue that the special status of moral rules cannot be part of
    human nature, but is rather just a historically and culturally
    specific conception, an artifact of Western values. "When I first
    began to do fieldwork among the Shona-speaking Manyika of Zimbabwe,"
    writes Anita Jacobson-Widding, for example, "I tried to find a word
    that would correspond to the English concept 'morality.' I explained
    what I meant by asking my informants to describe the norms for good
    behavior toward other people. The answer was unanimous. The word for
    this was tsika. But when I asked my bilingual informants to translate
    tsika into English, they said that it was 'good manners.' And whenever
    I asked somebody to define tsika they would say 'Tsika is the proper
    way to greet people.'"

    Jacobson-Widding argues that the Manyika do not separate moral
    behavior from good manners. Lying, farting, and stealing are all
    equally violations of tsika. And if manners and morals cannot be
    differentiated, the whole study of moral universals is in trouble,
    because how--as Jacobson-Widding herself asks--can we study the
    similarities and differences in moral reasoning across cultures "when
    the concept of morality does not exist?" From the perspective of
    cognitive science, this dispute over the origins of the
    moral-conventional distinction is an empirical question, and one that
    might be resolvable with the new techniques of infant developmental
    psychology.

    One possibility is that children first distinguish "wrong" actions in
    their third year of life, as they begin to recognize the thoughts,
    feelings, and desires of other people. If this is true, the special
    status of moral reasoning would be tied to another special domain in
    human cognition: theory of mind, or our ability to make rich and
    specific inferences about the contents of other people's thoughts.
    Although this link is plausible, there is some evidence that
    distinguishing moral right from wrong is a more primitive part of
    cognition than theory of mind, and can exist independently. Unlike
    psychopathic children, who have impaired moral reasoning in the
    presence of intact theory of mind, autistic children who struggle to
    infer other people's thoughts are nevertheless able to make the normal
    moral-conventional distinction.

    Another hypothesis is that children acquire the notion of "wrong"
    actions in their second year, once they are old enough to hurt others
    and experience firsthand the distress of the victim. Blair, for
    example, has proposed that human beings and social species like
    canines have developed a hard-wired "violence-inhibition mechanism" to
    restrain aggression against members of the same species. This
    mechanism is activated by a victim's signals of distress and
    submission (like a dog rolling over onto its back) and produces a
    withdrawal response. When this mechanism is activated in an attacker,
    withdrawal means that the violence stops. The class of "wrong"
    actions, those that cause the victim's distress, might be learned
    first for one's own actions and then extended derivatively to others'
    actions.

    Both of these hypotheses suggest a very early onset for the
    moral-conventional distinction. But possibly the strongest evidence
    against the anthropologists' claim that this distinction is just a
    cultural construct would come from studies of even younger children:
    preverbal infants. To this end, developmental psychologists are
    currently using the new looking-time procedures to investigate this
    provocative third hypothesis: that before they can either walk or
    talk, young infants may already distinguish between hurting (morally
    wrong) and helping (morally right).

    In one study, conducted by Valerie Kuhlmeier and her colleagues at
    Yale, infants watched a little animated ball apparently struggling to
    climb a steep hill. A triangle and a square stood nearby. When the
    ball got just beyond halfway up, one of two things happened: either
    the triangle came over and gave the ball a helpful nudge up the hill,
    or the square came over and pushed the ball back down the hill. Then
    the cycle repeated. Later, the same infants saw a new scene: across
    flat ground the little ball went to sit beside either the triangle or
    the square. Twelve-month-old infants tended to look longer when the
    ball went to sit beside the "mean" shape. Perhaps they found the
    ball's choice surprising. Would you choose to hang out with someone
    who had pushed you down a hill?

    Another study, by Emmanuel Dupoux and his colleagues in France, used
    movies of live human actors. In one, the "nice" man pushes a backpack
    off a stool and helps a crying girl get up onto the stool, comforting
    her. In the second movie, the "mean" man pushes the girl off the
    stool, and picks up and consoles the backpack. The experiment is
    designed so that the amounts of crying, pushing, and comforting in the
    two movies are roughly equal. After the movies, the infants are given
    a choice to look at, or crawl to, either the "mean" man or the "nice"
    one. At 15 months, infants look more at the mean man but crawl more to
    the nice one.

    These results are interesting, but each of these studies provides
    evidence for a fairly weak claim: by the time they are one year old,
    babies can distinguish between helpful actions and hurtful ones. That
    is, infants seem to be sensitive to a difference between actions that
    are nice, right, fortunate, or appropriate and ones that are mean,
    wrong, undesirable, or inappropriate--even for novel actions executed
    by unknown agents. On any interpretation, this is an impressive
    discovery. But the difference that infants detect need not be a moral
    difference.

    These first infant studies of morality cannot answer the critical
    question, which concerns the origin not of the distinction between
    nice and mean but of the distinction between right and wrong: the
    idea that some conduct is unacceptable, whatever the local
    authorities say.
    Eventually, infant studies may provide evidence that the concepts of
    morality and convention can be distinguished, even among the
    Manyika--that is, that a special concept of morality is part of the
    way infants interpret the world, even when they are too young to be
    influenced by culture-specific constructions. So far, though, such
    infant studies are a long way off.

    In the meantime, we will have to turn to other methods, traditional
    and modern, to adjudicate the debate between psychologists and
    anthropologists over the existence of moral universals. First, if
    Hauser and Mikhail's Internet-survey results really do generalize to a
    wider population, as the scientists hope, then we might predict that
    Manyika men and women would give the same answers that everyone else
    does to the Trolley Problems. If so, would that challenge our notions
    of how different from us they really are?

    Second, if Elliott Turiel and his colleagues are right, then even
    Manyika children should distinguish between manners, which depend on
    local custom, and morals, which do not, when asked the right kinds of
    questions. For example, according to Manyika custom, "If you are a man
    greeting a woman, you should sit on a bench, keep your back straight
    and your neck stiff, while clapping your own flat hands in a steady
    rhythm." What if we told a four-year-old Manyika child about another
    place, very far away, where both men and women are supposed to sit on
    the ground when greeting each other? Or another place where one man is
    supposed to steal another man's yams? Would the children accept the
    first "other world" but not the second? I have never met a Manyika
    four-year-old, so I cannot guess, but if so, then we would have
    evidence that the Manyika do have a moral-conventional distinction
    after all, at the level of moral judgment, if not at the level of
    moral justification.

    Finally, some modern cognitive scientists might reply, we scientists
    hold a trump card: we can now study moral reasoning in the brain.

    * * *

    In the last ten years, brain imaging (mostly functional magnetic
    resonance imaging, or fMRI) has probably exceeded all the other
    techniques in psychology combined in terms of growth rate, public
    visibility, and financial expense. The popularity of brain imaging is
    easy to understand: by studying the responses of live human brains,
    scientists seem to have a direct window into the operations of the
    mind.

    A basic MRI provides an amazingly fine-grained three-dimensional
    picture of the anatomy of soft tissues such as the gray and white
    matter (cell bodies and axons) of the brain, which are entirely
    invisible to x-rays. An fMRI also gives the blood's oxygen content in
    each brain region, an indication of recent metabolic activity in the
    cells and therefore an indirect measure of recent cell firing. The
    images produced by fMRI analyses show the brain regions in which the
    blood's oxygen content was significantly higher while the subject
    performed one task--a moral-judgment task, for example--than while the
    subject performed a different task--a non-moral-judgment task.
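
    The underlying computation is simple enough to sketch. For each small
    volume of brain tissue -- each "voxel" -- one compares the average
    signal across trials of the two tasks. The Python sketch below runs
    that comparison on simulated data; real analyses add hemodynamic
    modeling, spatial smoothing, and corrections for multiple comparisons
    that this toy version omits.

        # fMRI contrast sketch: voxelwise Welch t-statistics comparing
        # simulated BOLD signal during a "moral" task vs. a control task.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 40, 1000

        moral = rng.normal(100.0, 5.0, size=(n_trials, n_voxels))
        control = rng.normal(100.0, 5.0, size=(n_trials, n_voxels))
        moral[:, :10] += 5.0  # pretend 10 voxels respond to moral sentences

        # Per-voxel difference of means, scaled by its standard error.
        mean_diff = moral.mean(axis=0) - control.mean(axis=0)
        se = np.sqrt(moral.var(axis=0, ddof=1) / n_trials +
                     control.var(axis=0, ddof=1) / n_trials)
        t = mean_diff / se

        threshold = 3.5  # arbitrary cutoff for this sketch
        print("voxels with higher 'moral' signal:",
              np.flatnonzero(t > threshold))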

    Jorge Moll and his colleagues, for example, compared the blood-oxygen
    levels in the brain while subjects read different kinds of sentences:
    sentences describing moral violations ("They hung an innocent"),
    sentences describing unpleasant but not immoral actions ("He licked
    the dirty toilet"), and neutral sentences ("Stones are made of
    water"). They found that one brain region--the medial orbito-frontal
    cortex, the region just behind the space between the eyebrows--had a
    higher oxygenation level while subjects read the moral sentences than
    either of the other two kinds of sentences. Moll proposed that the
    medial orbito-frontal cortex must play some unique role in moral
    reasoning.

    In fact, this is not a new idea. In 1848 Phineas Gage was the
    well-liked foreman of a railroad-construction gang until a blasting
    accident destroyed his medial orbito-frontal cortex (along with a few
    neighboring brain regions). Although Gage survived the accident with
    his speech, motion, and even his intelligence unimpaired, he was,
    according to family and friends, "no longer Gage": obstinate,
    irresponsible, and capricious, he was unable to keep his job, and
    later he spent seven years as an exhibit in a traveling circus. Modern
    patients with similar brain damage show the same kinds of deficits:
    they are obscene, irreverent, and uninhibited, and they show
    disastrous judgment, both personally and professionally.

    Still, the claim of a moral brain region remains controversial among
    cognitive scientists, who disagree both about whether such a brain
    region exists and what the implications would be if it did. Joshua
    Greene of Princeton University, for example, investigates brain
    activity while subjects solve Trolley Problems. He finds lots of
    different brain regions recruited--as one might imagine--including
    regions associated with reading and understanding stories, logical
    problem-solving, and emotional responsiveness. What Greene doesn't
    find is any clear evidence of a "special" region for moral reasoning
    per se.

    More broadly, even if there were a specialized brain region that
    honored the moral-conventional distinction, what would this teach us
    about that distinction's source, or universality? Many people share
    the intuition that the existence of a specialized brain region would
    provide prima facie evidence of the biological reality of the
    moral-conventional distinction. The problem is that even finding a
    specialized neural region for a particular kind of thought does not
    tell us how that region got there. We know, for example, that there is
    a brain region that becomes specially attuned to the letters of the
    alphabet a person can read, but not to the letters of other alphabets;
    this does not make any one alphabet a human universal. Similarly, if
    Western minds (the only ones who participate in brain-imaging
    experiments at the moment) distinguish moral from conventional
    violations, it is not surprising that Western brains do.

    In sum, both enthusiasm and caution are in order. The discovery of a
    specialized brain region for moral reasoning will not simply resolve
    the venerable problem of moral universals, as proponents of imaging
    sometimes seem to claim. On the other hand, not every function a brain
    performs is assigned a specialized brain region. In visual cortex,
    there are specialized regions for seeing faces and human bodies, but
    there is no specialized region for recognizing chairs or shoes, just a
    general-purpose region for recognizing objects. Some distinctions are
    more important than others in the brain, whatever their importance in
    daily life. Cognitive neuroscience can tell us where on this scale the
    moral-conventional distinction falls.

    * * *

    One thing these cutting-edge studies certainly cannot tell us is the
    right answer to a moral dilemma. Cognitive science can offer a
    descriptive theory of moral reasoning, but not a normative one. That
    is, by studying infants or brains or people around the world, we may
    be able to offer an account of how people actually make moral
    decisions--which concepts are necessary, how different principles are
    weighed, what contextual factors influence the final decision--but we
    will not be able to say how people should make moral decisions.

    Cognitive scientists may eventually be able to prove that men and
    women reason about Trolley Problems with equal sophistication, that
    African infants distinguish moral rules that are independent of local
    authority from conventions that are not, and even that the infants are
    using a specialized brain region to do so. What they cannot tell us is
    whether personal and social obligations should triumph over the
    prohibition against stealing, whether Mike should steal the ticket,
    and whether in the end it would be a better world to live in if he
    did.

    Rebecca Saxe is a junior fellow of the Harvard Society of Fellows.
    Originally published in the September/October 2005 issue of Boston
    Review.



More information about the paleopsych mailing list