[Paleopsych] NYTBR: 'The Ethical Brain': Mind Over Gray Matter
Premise Checker
checker at panix.com
Sat Jun 18 23:27:00 UTC 2005
'The Ethical Brain': Mind Over Gray Matter
New York Times Book Review, 5.6.19
http://www.nytimes.com/2005/06/19/books/review/19SATELL.html
[First chapter appended.]
THE ETHICAL BRAIN
By Michael S. Gazzaniga.
201 pp. Dana Press. $25.
By SALLY SATEL
TOM WOLFE was so taken with Michael S. Gazzaniga's ''Social Brain''
that not only did he send Gazzaniga a note calling it the best book on
the brain ever written, but he also had Charlotte Simmons's Nobel Prize-winning
neuroscience professor recommend it in class. In ''The Ethical
Brain,'' Gazzaniga tries to make the leap from neuroscience to
neuroethics and address moral predicaments raised by developments in
brain science. The result is stimulating, very readable and at its
most edifying when it sticks to science.
As director of the Center for Cognitive Neuroscience at Dartmouth
College and the indefatigable author of five previous books on the
brain for the general reader alone, Gazzaniga is less interested in
delivering verdicts on bioethical quandaries -- should we clone? tinker
with our babies' I.Q.? -- than in untangling how we arrive at moral
and ethical judgments in the first place.
Take the issue of raising intelligence by manipulating genes in
test-tube embryos. Gazzaniga asks three questions. Is it technically
possible to pick out ''intelligence genes''? If so, do those genes
alone determine intelligence? And finally, is this kind of
manipulation ethical? ''Most people jump to debate the final
question,'' he rightly laments, ''without considering the implications
of the answers to the first two.'' Gazzaniga's view is that someday it
will be possible to tweak personality and intelligence through genetic
manipulation. But because personhood is so significantly affected by
factors like peer influence and chance, which scientists can't
control, we won't be able to make ''designer babies,'' nor, he
believes, will we want to.
Or consider what a ''smart pill'' might do to old-fashioned sweat and
toil. Gazzaniga isn't especially worried. Neither a smart pill nor
genetic manipulation will get you off the hook: enhancement might
enable you to grasp connections more easily; still, the fact remains
that ''becoming an expert athlete or musician takes hours of practice
no matter what else you bring to the task.''
But there are ''public, social'' implications. Imagine basketball
stars whose shoes bear the logo not of Nike or Adidas but of Wyeth or
Hoffmann-La Roche, ''touting the benefits of their neuroenhancing
drugs.'' ''If we allow physical enhancements,'' Gazzaniga argues,
''some kind of pharmaceutical arms race would ensue and the whole
logic of competition would be neutralized.'' Gazzaniga has no doubt
that ''neuroscience will figure out how to tamper'' with neurochemical
and genetic processes. But, he says, ''I remain convinced that
enhancers that improve motor skills are cheating, while those that
help you remember where you put your car keys are fine.''
So where, as Gazzaniga asks, ''do the hard-and-fast facts of
neuroscience end, and where does ethics begin?'' In a chapter aptly
called ''My Brain Made Me Do It,'' Gazzaniga puts the reader in the
jury box in the case of a hypothetical Harry and ''a horrible event.''
This reader confesses impatience with illuminated brain scans
routinely used to show that people ''addicted'' to drugs -- or food,
sex, the Internet, gambling -- have no control over their behavior.
Refreshingly, Gazzaniga declares ''the view of human behavior offered
by neuroscience is simply at odds with this idea.''
''Just as optometrists can tell us how much vision a person has (20/20
or 20/40 or 20/200) but cannot tell us when someone is legally
blind,'' he continues, ''brain scientists might be able to tell us
what someone's mental state or brain condition is but cannot tell us
(without being arbitrary) when someone has too little control to be
held responsible.''
Last year, when the United States Supreme Court heard arguments
against the death penalty for juveniles, the American Medical
Association and other health groups, including psychiatrists and
psychologists, filed briefs arguing that children should not be
treated as adults under the law because in normal brain development
the frontal lobe -- the region of the brain that helps curb impulses
and conduct moral reasoning -- of an adolescent is still immature.
''Neuroscientists should stay in the lab and let lawyers stay in the
courtroom,'' Gazzaniga writes.
Moving on to the provocative concept of ''brain privacy,'' Gazzaniga
describes brain fingerprinting -- identifying brain patterns
associated with lying -- and cautions that just like conventional
polygraph tests, these ''much more complex tests . . . are fraught
with uncertainties.'' He also provides perspective on the so-called
bias tests increasingly used in social science and the law, like one
recently described in a Washington Post Magazine article. Subjects
were asked to pair images of black faces with positive or negative
words (''wonderful,'' ''nasty''); if they pressed a computer key to
pair the black face with a positive word several milliseconds more
slowly than they paired it with a negative word, bias was supposed.
The unfortunate headline: ''See No Bias: Many Americans believe they
are not prejudiced. Now a new test provides powerful evidence that a
majority of us really are. Assuming we accept the results, what can we
do about it?''
Nonsense, Gazzaniga would say. Human brains make categories based on
prior experience or cultural assumptions. This is not sinister; it is
normal brain function -- and when experience or assumptions change,
response patterns change. ''It appears that a process in the brain
makes it likely that people will categorize others on the basis of
race,'' he writes. ''Yet this is not the same thing as being racist.''
Nor have split-second reactions like these been convincingly linked to
discrimination in the real world. ''Brains are automatic,
rule-governed, determined devices, while people are personally
responsible agents,'' Gazzaniga says. ''Just as traffic is what
happens when physically determined cars interact, responsibility is
what happens when people interact.''
Clearly, Gazzaniga is not a member of the handwringer school, like
some of his fellow members of the President's Council on Bioethics. At
the same time, his faith in our ability to regulate ourselves is
touching. He notes that sex selection appears to be producing
alarmingly unbalanced ratios of men to women in many countries.
''Tampering with the evolved human fabric is playing with fire,'' he
writes. ''Yet I also firmly believe we can handle it. . . . We humans
are good at adapting to what works, what is good and beneficial, and,
in the end, jettisoning the unwise.''
Gazzaniga looks to the day when neuroethics can derive ''a brain-based
philosophy of life.'' But ''The Ethical Brain'' does not always make
clear how understanding brain mechanisms can help us deal with hard
questions like the status of the embryo or the virtues of prolonging
life well over 100 years. And occasionally the book reads as if
technical detail has been sacrificed for brevity.
A final, speculative section, ''The Nature of Moral Beliefs and the
Concept of Universal Ethics,'' explores whether there is ''an innate
human moral sense.'' The theories of evolutionary psychology point
out, Gazzaniga notes, that ''moral reasoning is good for human
survival,'' and social science has concluded that human societies
almost universally share rules against incest and murder while valuing
family loyalty and truth telling. ''We must commit ourselves to the
view that a universal ethics is possible,'' he concludes. But is such
a commitment important if, as his discussion suggests, we are guided
by a universal moral compass?
Still, ''The Ethical Brain'' provides us with cautions -- prominent
among them that ''neuroscience will never find the brain correlate of
responsibility, because that is something we ascribe to humans -- to
people -- not to brains. It is a moral value we demand of our fellow,
rule-following human beings.'' This statement -- coming as it does
from so eminent a neuroscientist -- is a cultural contribution in
itself.
Sally Satel is a psychiatrist and resident scholar at the American
Enterprise Institute and a co-author of ''One Nation Under Therapy:
How the Helping Culture Is Eroding Self-Reliance.''
---------------
First chapter of 'The Ethical Brain'
http://www.nytimes.com/2005/06/19/books/chapters/0619-1st-gazza.html
By MICHAEL S. GAZZANIGA
Conferring Moral Status on an Embryo
Central to many of the bioethical issues of our time is the question,
When should society confer moral status on an embryo? When should we
call an embryo or a fetus one of us? The fertilized egg represents the
starting point for the soon-to-be dividing entity that will grow into
a fetus and finally into a baby. It is a given that a fertilized egg
is the beginning of the life of an individual. It is also a given that
it is not the beginning of life, since both the egg and the sperm,
prior to uniting, represent life just as any living plant or creature
represents life. Yet is it right to attribute the same moral status to
that human embryo that one attributes to a newborn baby or, for that
matter, to any living human? Bioethicists continue to wrestle with the
question. The implications of determining the beginning of moral
status are far-reaching, affecting abortion, in vitro fertilization,
biomedical cloning, and stem cell research. The rational world is
waiting for resolution of this debate.
This issue shows us how the field of neuroethics goes beyond that of
classic bioethics. When ethical dilemmas involve the nervous system,
either directly or indirectly, those trained in the field of
neuroscience have something to say. They can peek under the lid, as it
were, and help all of us to understand what the actual biological
state is and is not. Is a brain present? Is it functioning in any
meaningful way?
Neuroscientists study the organ that makes us uniquely human -- the
brain, that which enables a conscious life. They are constantly
seeking knowledge about what areas of the brain sustain mental
thought, parts of mental thought, or no thought. So at first glance,
it might seem that neuroethicists could determine the moral status of
an embryo or fetus based on the presence of the sort of biological
material that can support mental life and the sort that cannot -- in
other words, whether the embryo has a brain that functions at a level
that supports mental activity. Modern brain science is prepared to
answer this question, but while the neurobiology may be clear,
neuroethics runs into problems when it tries to impose rational,
scientific facts on moral and ethical issues.
The Path to Conscious Life
As soon as sperm meets egg, the embryo begins its mission: divide and
differentiate, divide and differentiate, divide and differentiate. The
embryo starts out as the melding of these two cells and must
eventually become the approximately 50 trillion cells that make up the
human organism. There is no time to lose -- after only a few hours, three
distinct areas of the embryo are apparent. These areas become the
endoderm, mesoderm, and ectoderm, the initial three layers of cells
that will differentiate to become all the organs and components of the
human body. The layer of the ectoderm gives rise to the nervous
system.
As the embryo continues to grow in the coming weeks, the base of the
portion of the embryo called the neural tube eventually gives rise to
neurons and other cells of the central nervous system, while an
adjacent portion of the embryo called the neural crest eventually
becomes cells of the peripheral nervous system (the nerves outside the
brain and spinal cord). The cavity of the neural tube gives rise to
the ventricles of the brain and the central canal of the spinal cord,
and in week 4 the neural tube develops three distinct bulges that
correspond to the areas that will become the three major divisions of
the brain: forebrain, midbrain, and hindbrain. The early signs of a
brain have begun to form.
Even though the fetus is now developing areas that will become
specific sections of the brain, not until the end of week 5 and into
week 6 (usually around forty to forty-three days) does the first
electrical brain activity begin to occur. This activity, however, is
not coherent activity of the kind that underlies human consciousness,
or even the coherent activity seen in a shrimp's nervous system. Just
as neural activity is present in clinically brain-dead patients, early
neural activity consists of unorganized neuron firing of a primitive
kind. Neuronal activity by itself does not represent integrated
behavior.
During weeks 8 to 10, the cerebrum begins its development in earnest.
Neurons proliferate and begin their migration throughout the brain.
The anterior commissure, which is the first interhemispheric
connection (a small one), also develops. Reflexes appear for the first
time during this period.
The frontal and temporal poles of the brain are apparent during weeks
12 to 16, and the frontal pole (which becomes the neocortex) grows
disproportionately fast when compared with the rest of the cortex. The
surface of the cortex appears flat through the third month, but by the
end of the fourth month indentations, or sulci, appear. (These develop
into the familiar folds of the cerebrum.) The different lobes of the
brain also become apparent, and neurons continue to proliferate and
migrate throughout the cortex. By week 13 the fetus has begun to move.
Around this time the corpus callosum, the massive collection of fibers
(the axons of neurons) that allow for communication between the
hemispheres, begins to develop, forming the infrastructure for the
major part of the cross talk between the two sides of the brain. Yet
the fetus is not a sentient, self-aware organism at this point; it is
more like a sea slug, a writhing, reflex-bound hunk of sensory-motor
processes that does not respond to anything in a directed, purposeful
way. Laying down the infrastructure for a mature brain and possessing
a mature brain are two very different states of being.
Synapses -- the points where two neurons, the basic building blocks of
the nervous system, come together to interact -- form in large numbers
during the seventeenth and following weeks, allowing for communication
between individual neurons. Synaptic activity underlies all brain
functions. Synaptic growth does not skyrocket until around
postconception day 200 (week 28). Nonetheless, at around week 23 the
fetus can survive outside the womb, with medical support; also around
this time the fetus can respond to aversive stimuli. Major synaptic
growth continues until the third or fourth postnatal month. Sulci
continue to develop as the cortex starts folding to create a larger
surface area and to accommodate the growing neurons and their
supporting glial cells. During this period, neurons begin to myelinate
(a process of insulation that speeds their electrical communication).
By the thirty-second week, the fetal brain is in control of breathing
and body temperature.
By the time a child is born, the brain largely resembles that of an
adult but is far from finished with development. The cortex will
continue to increase in complexity for years, and synapse formation
will continue for a lifetime.
The Arguments
That is the quick and easy neurobiology of fetal brain development.
The embryonic stage reveals that the fertilized egg is a clump of
cells with no brain; the processes that begin to generate a nervous
system do not begin until after the fourteenth day. No sustainable or
complex nervous system is in place until approximately six months of
gestation.
The fact that it is clear that a human brain isn't viable until week
23, and only then with the aid of modern medical support, seems to
have no impact on the debate. This is where neuro "logic" loses out.
Moral arguments get mixed in with biology, and the result is a stew of
passions, beliefs, and stubborn, illogical opinion. Based on the
specific question being asked, I myself have different answers about
when moral status should be conferred on a fetus. For instance,
regarding the use of embryos for biomedical research, I find the
fourteen-day cutoff employed by researchers to be a completely
acceptable practice. However, in judging a fetus "one of us," and
granting it the moral and legal rights of a human being, I put the age
much later, at twenty-three weeks, when life is sustainable and that
fetus could, with a little help from a neonatal unit, survive and
develop into a thinking human being with a normal brain. This is the
same age at which the Supreme Court has ruled that the fetus becomes
protected from abortion.
As a father, I have a perceptual reaction to the Carnegie
developmental stages of a fetus: the image of Stage 23, when the fetus
is approximately eight weeks old, suggests a small human being. Until
that stage, it is difficult to tell the difference between a pig
embryo and a human embryo. But then -- bingo -- up pops the beginning shape
of the human head, and it looks unmistakably like one of us. Again,
this is around eight weeks, more than two thirds into the first
trimester. I am reacting to a sentiment that wells up in me, a
perceptual moment that is stark, defining, and real. And yet, at the
level of neuroscientific knowledge, it could easily be argued that my
view is nonsensical. The brain at Carnegie Stage 23, which has slowly
been developing from roughly the fifteenth day, is hardly a brain that
could sustain any serious mental life. If a grown adult had suffered
massive brain damage, reducing the brain to this level of development,
the patient would be considered brain dead and a candidate for organ
donation. Society has defined the point at which an inadequately
functioning brain no longer deserves moral status. If we look at the
requirements for brain death, and examine how they compare with the
developmental sequence, we see that the brain of a third-trimester
baby, or perhaps even a second-trimester baby, could be so analyzed.
So why would I draw a line at Carnegie Stage 23 when the
neuroscientific knowledge makes it clear that the brain at this stage
is not ready for prime-time life?
I am trying to make a neuroethical argument here, and I cannot avoid a
"gut reaction." Of course, it is my gut reaction, and others may not
have it at all. In recognizing it within me, however, I am able to
appreciate how difficult these decisions are for many people. Even
though I can't imagine, and do not have, a gut reaction to seeing a
fourteen-day-old blastocyst, an entity the size of the dot of an i on
this page, that dot may serve as a stimulus to the belief system of
those who hold that all fertilized eggs are worthy of our respect.
Still, I would argue that assigning equivalent moral status to a
fourteen-day-old ball of cells and to a premature baby is conceptually
forced. Holding them to be the same is a sheer act of personal belief.
The Continuity and Potentiality Arguments
There is, they argue, no clear place to draw a line after the
earliest formation of the organism, and so there can be no stark
division between the moral standing of nascent human life and that
of more mature individuals. -- From Monitoring Stem Cell Research,
the President's Council on Bioethics, 2004
Obviously there is a point of view that life begins at conception. The
continuity argument is that a fertilized egg will go on to become a
person and therefore deserves the rights of an individual, because it
is unquestionably where a particular individual's life begins. If one
is not willing to parse the subsequent events of development, then
this becomes one of those arguments you can't argue with. Either you
believe it or you don't. While those who argue this point try to
suggest that anyone who values the sanctity of human life must see
things this way, the fact is that this just isn't so. This view comes,
to a large extent, from the Catholic Church, the American religious
right, and even many atheists and agnostics. On the other side, Jews,
Muslims, Hindus, many Christians, and other atheists and agnostics do
not believe it. Certain Jews and Muslims believe that the embryo
deserves to be assigned the moral status of a "human" after forty days
of development. Many Catholics believe the same, and many have written
to me expressing those views based on their own reading of church
history.
When we examine the issue of brain death -- that is, when life ends -- it
also begins to become clear that something else is at work here: our
own brain's need to form beliefs. If we examine how a common set of
accepted rational, scientific facts can lead to different moral
judgments, we see the need to consider what influences these varying
conclusions, and we can begin to extricate certain neuroethical issues
from the arbitrary contexts in which they may initially have been
considered.
Different cultures view brain death differently. Brain death is
declared medically when a patient is in an irreversible coma due to
brain injury -- from a stroke, for example -- and has no brain stem
response, leading to a flat EEG (that is, no sign of brain activity on
an electroencephalography recording), or ability to breathe
independently. A survey published in the journal Neurology in 2000
compared worldwide standards and regulations for declaring brain
death. The concept of brain death is accepted worldwide: even in the
most religious societies no one argues that human life continues to
exist when the brain is irreversibly unable to function. What differs
is the procedure for determining brain death. And these societal
differences reveal how bioethical practices and laws can vary so
wildly, for reasons that have nothing to do with science but instead
are based on politics, religion, or, in most cases, the differing
personal beliefs of a task force. For instance, China has no
standards, while Hong Kong has well-defined criteria -- left over, no
doubt, from its having been under the rule of the United Kingdom. The
Republic of Georgia requires that a doctor with five years of
neuroscience practice determine brain death; this is not so in Russia.
Iran requires the greatest number of observations -- at twelve,
twenty-four, and thirty-six hours -- with three physicians; and in the
United States, several states have adapted the Uniform Determination of
Death Act, including New York and New Jersey, both of which have a
religious-objections loophole.
The example of brain death illustrates how rules and regulations on
bioethical issues can be formed and influenced by beliefs that have
nothing to do with the accepted scientific facts. No one debates that
a line has been crossed when the loss of brain function is such that
life ceases. What we differ on isn't even when that line should be
drawn-most countries have similar definitions of brain death. What
differs is largely who makes the call and what tests are
used -- differences, basically, in how you know when you get there, not
where "there" is.
So, too, we all seem to be in agreement that there must be a point at
which moral status should be conferred on an embryo or fetus. However,
we seem to have a harder time defining that point, regardless of the
facts. . . .